00:00:00.001 Started by upstream project "autotest-nightly" build number 4126 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3488 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.039 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.039 The recommended git tool is: git 00:00:00.039 using credential 00000000-0000-0000-0000-000000000002 00:00:00.041 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.055 Fetching changes from the remote Git repository 00:00:00.058 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.076 Using shallow fetch with depth 1 00:00:00.076 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.076 > git --version # timeout=10 00:00:00.106 > git --version # 'git version 2.39.2' 00:00:00.106 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.148 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.148 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.373 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.385 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.398 Checking out Revision 7510e71a2b3ec6fca98e4ec196065590f900d444 (FETCH_HEAD) 00:00:03.398 > git config core.sparsecheckout # timeout=10 00:00:03.409 > git read-tree -mu HEAD # timeout=10 00:00:03.423 > git checkout -f 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=5 00:00:03.442 Commit message: 
"kid: add issue 3541" 00:00:03.442 > git rev-list --no-walk 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=10 00:00:03.556 [Pipeline] Start of Pipeline 00:00:03.569 [Pipeline] library 00:00:03.571 Loading library shm_lib@master 00:00:03.571 Library shm_lib@master is cached. Copying from home. 00:00:03.588 [Pipeline] node 00:00:03.608 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:03.609 [Pipeline] { 00:00:03.618 [Pipeline] catchError 00:00:03.619 [Pipeline] { 00:00:03.630 [Pipeline] wrap 00:00:03.639 [Pipeline] { 00:00:03.645 [Pipeline] stage 00:00:03.646 [Pipeline] { (Prologue) 00:00:03.662 [Pipeline] echo 00:00:03.664 Node: VM-host-WFP7 00:00:03.670 [Pipeline] cleanWs 00:00:03.680 [WS-CLEANUP] Deleting project workspace... 00:00:03.680 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.686 [WS-CLEANUP] done 00:00:03.879 [Pipeline] setCustomBuildProperty 00:00:03.965 [Pipeline] httpRequest 00:00:04.416 [Pipeline] echo 00:00:04.417 Sorcerer 10.211.164.101 is alive 00:00:04.426 [Pipeline] retry 00:00:04.428 [Pipeline] { 00:00:04.438 [Pipeline] httpRequest 00:00:04.442 HttpMethod: GET 00:00:04.443 URL: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:04.444 Sending request to url: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:04.444 Response Code: HTTP/1.1 200 OK 00:00:04.444 Success: Status code 200 is in the accepted range: 200,404 00:00:04.445 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:04.709 [Pipeline] } 00:00:04.728 [Pipeline] // retry 00:00:04.733 [Pipeline] sh 00:00:05.015 + tar --no-same-owner -xf jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:05.030 [Pipeline] httpRequest 00:00:05.704 [Pipeline] echo 00:00:05.706 Sorcerer 10.211.164.101 is alive 00:00:05.714 [Pipeline] retry 00:00:05.716 [Pipeline] { 00:00:05.730 [Pipeline] httpRequest 00:00:05.734 HttpMethod: 
GET 00:00:05.735 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:05.735 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:05.737 Response Code: HTTP/1.1 200 OK 00:00:05.737 Success: Status code 200 is in the accepted range: 200,404 00:00:05.738 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:26.224 [Pipeline] } 00:00:26.247 [Pipeline] // retry 00:00:26.255 [Pipeline] sh 00:00:26.543 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:29.100 [Pipeline] sh 00:00:29.385 + git -C spdk log --oneline -n5 00:00:29.385 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:00:29.385 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:00:29.385 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:00:29.385 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:00:29.385 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event() 00:00:29.405 [Pipeline] writeFile 00:00:29.419 [Pipeline] sh 00:00:29.705 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:29.719 [Pipeline] sh 00:00:30.004 + cat autorun-spdk.conf 00:00:30.004 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:30.004 SPDK_RUN_ASAN=1 00:00:30.004 SPDK_RUN_UBSAN=1 00:00:30.004 SPDK_TEST_RAID=1 00:00:30.004 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:30.012 RUN_NIGHTLY=1 00:00:30.014 [Pipeline] } 00:00:30.028 [Pipeline] // stage 00:00:30.043 [Pipeline] stage 00:00:30.045 [Pipeline] { (Run VM) 00:00:30.058 [Pipeline] sh 00:00:30.344 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:30.344 + echo 'Start stage prepare_nvme.sh' 00:00:30.344 Start stage prepare_nvme.sh 00:00:30.344 + [[ -n 0 ]] 00:00:30.344 + disk_prefix=ex0 00:00:30.344 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:00:30.344 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:00:30.344 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:00:30.344 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:30.344 ++ SPDK_RUN_ASAN=1 00:00:30.344 ++ SPDK_RUN_UBSAN=1 00:00:30.344 ++ SPDK_TEST_RAID=1 00:00:30.344 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:30.344 ++ RUN_NIGHTLY=1 00:00:30.344 + cd /var/jenkins/workspace/raid-vg-autotest 00:00:30.344 + nvme_files=() 00:00:30.344 + declare -A nvme_files 00:00:30.344 + backend_dir=/var/lib/libvirt/images/backends 00:00:30.344 + nvme_files['nvme.img']=5G 00:00:30.344 + nvme_files['nvme-cmb.img']=5G 00:00:30.344 + nvme_files['nvme-multi0.img']=4G 00:00:30.344 + nvme_files['nvme-multi1.img']=4G 00:00:30.344 + nvme_files['nvme-multi2.img']=4G 00:00:30.344 + nvme_files['nvme-openstack.img']=8G 00:00:30.344 + nvme_files['nvme-zns.img']=5G 00:00:30.344 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:30.344 + (( SPDK_TEST_FTL == 1 )) 00:00:30.344 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:30.344 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:30.344 + for nvme in "${!nvme_files[@]}" 00:00:30.344 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:00:30.344 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:30.344 + for nvme in "${!nvme_files[@]}" 00:00:30.344 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:00:30.344 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:30.344 + for nvme in "${!nvme_files[@]}" 00:00:30.344 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:00:30.344 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:30.344 + for nvme in "${!nvme_files[@]}" 00:00:30.344 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:00:30.344 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:30.345 + for nvme in "${!nvme_files[@]}" 00:00:30.345 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:00:30.345 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:30.345 + for nvme in "${!nvme_files[@]}" 00:00:30.345 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:00:30.345 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:30.345 + for nvme in "${!nvme_files[@]}" 00:00:30.345 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:00:30.605 
Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:30.605 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:00:30.605 + echo 'End stage prepare_nvme.sh' 00:00:30.605 End stage prepare_nvme.sh 00:00:30.618 [Pipeline] sh 00:00:30.903 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:30.904 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:00:30.904 00:00:30.904 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:00:30.904 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:00:30.904 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:00:30.904 HELP=0 00:00:30.904 DRY_RUN=0 00:00:30.904 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:00:30.904 NVME_DISKS_TYPE=nvme,nvme, 00:00:30.904 NVME_AUTO_CREATE=0 00:00:30.904 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:00:30.904 NVME_CMB=,, 00:00:30.904 NVME_PMR=,, 00:00:30.904 NVME_ZNS=,, 00:00:30.904 NVME_MS=,, 00:00:30.904 NVME_FDP=,, 00:00:30.904 SPDK_VAGRANT_DISTRO=fedora39 00:00:30.904 SPDK_VAGRANT_VMCPU=10 00:00:30.904 SPDK_VAGRANT_VMRAM=12288 00:00:30.904 SPDK_VAGRANT_PROVIDER=libvirt 00:00:30.904 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:30.904 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:30.904 SPDK_OPENSTACK_NETWORK=0 00:00:30.904 VAGRANT_PACKAGE_BOX=0 00:00:30.904 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:30.904 
FORCE_DISTRO=true 00:00:30.904 VAGRANT_BOX_VERSION= 00:00:30.904 EXTRA_VAGRANTFILES= 00:00:30.904 NIC_MODEL=virtio 00:00:30.904 00:00:30.904 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:00:30.904 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:00:32.814 Bringing machine 'default' up with 'libvirt' provider... 00:00:33.074 ==> default: Creating image (snapshot of base box volume). 00:00:33.335 ==> default: Creating domain with the following settings... 00:00:33.335 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727512690_329a66ff44edb77104a9 00:00:33.335 ==> default: -- Domain type: kvm 00:00:33.335 ==> default: -- Cpus: 10 00:00:33.335 ==> default: -- Feature: acpi 00:00:33.335 ==> default: -- Feature: apic 00:00:33.335 ==> default: -- Feature: pae 00:00:33.335 ==> default: -- Memory: 12288M 00:00:33.335 ==> default: -- Memory Backing: hugepages: 00:00:33.335 ==> default: -- Management MAC: 00:00:33.335 ==> default: -- Loader: 00:00:33.335 ==> default: -- Nvram: 00:00:33.335 ==> default: -- Base box: spdk/fedora39 00:00:33.335 ==> default: -- Storage pool: default 00:00:33.335 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727512690_329a66ff44edb77104a9.img (20G) 00:00:33.335 ==> default: -- Volume Cache: default 00:00:33.335 ==> default: -- Kernel: 00:00:33.335 ==> default: -- Initrd: 00:00:33.335 ==> default: -- Graphics Type: vnc 00:00:33.335 ==> default: -- Graphics Port: -1 00:00:33.335 ==> default: -- Graphics IP: 127.0.0.1 00:00:33.335 ==> default: -- Graphics Password: Not defined 00:00:33.335 ==> default: -- Video Type: cirrus 00:00:33.335 ==> default: -- Video VRAM: 9216 00:00:33.335 ==> default: -- Sound Type: 00:00:33.335 ==> default: -- Keymap: en-us 00:00:33.335 ==> default: -- TPM Path: 00:00:33.335 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:33.335 ==> default: -- Command line args: 00:00:33.335 
==> default: -> value=-device, 00:00:33.335 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:33.335 ==> default: -> value=-drive, 00:00:33.335 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:00:33.335 ==> default: -> value=-device, 00:00:33.335 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.335 ==> default: -> value=-device, 00:00:33.335 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:33.335 ==> default: -> value=-drive, 00:00:33.335 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:33.335 ==> default: -> value=-device, 00:00:33.335 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.335 ==> default: -> value=-drive, 00:00:33.335 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:33.335 ==> default: -> value=-device, 00:00:33.335 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.335 ==> default: -> value=-drive, 00:00:33.335 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:33.335 ==> default: -> value=-device, 00:00:33.335 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.335 ==> default: Creating shared folders metadata... 00:00:33.335 ==> default: Starting domain. 00:00:34.719 ==> default: Waiting for domain to get an IP address... 00:00:52.874 ==> default: Waiting for SSH to become available... 00:00:52.874 ==> default: Configuring and enabling network interfaces... 
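The `-device nvme` / `-device nvme-ns` pairs above attach one namespace to controller serial 12340 and three to 12341, each backed by a raw image. A minimal sketch of assembling that second controller's topology as a bash array (paths and IDs mirror the log but are illustrative, not a substitute for the vagrant script that generates them):

```shell
#!/usr/bin/env bash
# Sketch: build QEMU args for one NVMe controller backed by three raw
# images, one namespace per image (nsid 1..3). Paths are illustrative.
set -euo pipefail

backend_dir=/var/lib/libvirt/images/backends   # layout assumed from the log
args=(-device "nvme,id=nvme-1,serial=12341,addr=0x11")
nsid=1
for img in ex0-nvme-multi0.img ex0-nvme-multi1.img ex0-nvme-multi2.img; do
    # each namespace needs a backing -drive plus an nvme-ns device bound to it
    args+=(-drive  "format=raw,file=${backend_dir}/${img},if=none,id=nvme-1-drive$((nsid-1))")
    args+=(-device "nvme-ns,drive=nvme-1-drive$((nsid-1)),bus=nvme-1,nsid=${nsid},logical_block_size=4096,physical_block_size=4096")
    nsid=$((nsid+1))
done
printf '%s\n' "${args[@]}"
```

The same pattern extends to the single-namespace controller (12340) with one drive/ns pair.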
00:00:58.157 default: SSH address: 192.168.121.195:22 00:00:58.157 default: SSH username: vagrant 00:00:58.157 default: SSH auth method: private key 00:01:00.700 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:08.827 ==> default: Mounting SSHFS shared folder... 00:01:11.368 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:11.368 ==> default: Checking Mount.. 00:01:12.752 ==> default: Folder Successfully Mounted! 00:01:12.752 ==> default: Running provisioner: file... 00:01:14.132 default: ~/.gitconfig => .gitconfig 00:01:14.392 00:01:14.392 SUCCESS! 00:01:14.392 00:01:14.392 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:14.392 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:14.392 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
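The prepare_nvme.sh stage earlier in the log drives image creation from a bash associative array mapping image name to size. A hedged sketch of that pattern, with `truncate(1)` standing in for SPDK's `create_nvme_img.sh` helper and a temp dir standing in for `/var/lib/libvirt/images/backends`:

```shell
#!/usr/bin/env bash
# Sketch of the prepare_nvme.sh pattern: declare the image set as an
# associative array (name -> size), then create each raw backing file.
# truncate creates sparse files; the real helper uses qemu-img-style
# raw images with falloc preallocation.
set -euo pipefail

declare -A nvme_files=(
    [nvme.img]=5G
    [nvme-multi0.img]=4G
    [nvme-multi1.img]=4G
    [nvme-multi2.img]=4G
)
backend_dir=$(mktemp -d)   # stand-in for /var/lib/libvirt/images/backends
for nvme in "${!nvme_files[@]}"; do
    truncate -s "${nvme_files[$nvme]}" "${backend_dir}/ex0-${nvme}"
done
ls "${backend_dir}"
```

Iterating over `"${!nvme_files[@]}"` (the keys) is why the log's creation order doesn't match declaration order: bash associative arrays are unordered.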
00:01:14.392 00:01:14.402 [Pipeline] } 00:01:14.416 [Pipeline] // stage 00:01:14.425 [Pipeline] dir 00:01:14.426 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:14.427 [Pipeline] { 00:01:14.439 [Pipeline] catchError 00:01:14.441 [Pipeline] { 00:01:14.453 [Pipeline] sh 00:01:14.738 + vagrant ssh-config --host vagrant 00:01:14.738 + sed -ne /^Host/,$p 00:01:14.738 + tee ssh_conf 00:01:17.276 Host vagrant 00:01:17.276 HostName 192.168.121.195 00:01:17.276 User vagrant 00:01:17.276 Port 22 00:01:17.276 UserKnownHostsFile /dev/null 00:01:17.276 StrictHostKeyChecking no 00:01:17.276 PasswordAuthentication no 00:01:17.276 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:17.276 IdentitiesOnly yes 00:01:17.276 LogLevel FATAL 00:01:17.276 ForwardAgent yes 00:01:17.276 ForwardX11 yes 00:01:17.276 00:01:17.289 [Pipeline] withEnv 00:01:17.291 [Pipeline] { 00:01:17.303 [Pipeline] sh 00:01:17.587 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:17.587 source /etc/os-release 00:01:17.587 [[ -e /image.version ]] && img=$(< /image.version) 00:01:17.587 # Minimal, systemd-like check. 00:01:17.587 if [[ -e /.dockerenv ]]; then 00:01:17.587 # Clear garbage from the node's name: 00:01:17.587 # agt-er_autotest_547-896 -> autotest_547-896 00:01:17.587 # $HOSTNAME is the actual container id 00:01:17.587 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:17.587 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:17.587 # We can assume this is a mount from a host where container is running, 00:01:17.587 # so fetch its hostname to easily identify the target swarm worker. 
00:01:17.587 container="$(< /etc/hostname) ($agent)" 00:01:17.587 else 00:01:17.587 # Fallback 00:01:17.587 container=$agent 00:01:17.587 fi 00:01:17.587 fi 00:01:17.587 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:17.587 00:01:17.859 [Pipeline] } 00:01:17.875 [Pipeline] // withEnv 00:01:17.882 [Pipeline] setCustomBuildProperty 00:01:17.896 [Pipeline] stage 00:01:17.898 [Pipeline] { (Tests) 00:01:17.914 [Pipeline] sh 00:01:18.197 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:18.468 [Pipeline] sh 00:01:18.750 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:19.024 [Pipeline] timeout 00:01:19.024 Timeout set to expire in 1 hr 30 min 00:01:19.026 [Pipeline] { 00:01:19.039 [Pipeline] sh 00:01:19.322 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:19.892 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:01:19.904 [Pipeline] sh 00:01:20.188 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:20.465 [Pipeline] sh 00:01:20.753 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:21.031 [Pipeline] sh 00:01:21.319 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:21.579 ++ readlink -f spdk_repo 00:01:21.579 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:21.579 + [[ -n /home/vagrant/spdk_repo ]] 00:01:21.579 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:21.579 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:21.579 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:21.579 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:21.579 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:21.579 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:21.579 + cd /home/vagrant/spdk_repo 00:01:21.579 + source /etc/os-release 00:01:21.579 ++ NAME='Fedora Linux' 00:01:21.579 ++ VERSION='39 (Cloud Edition)' 00:01:21.579 ++ ID=fedora 00:01:21.579 ++ VERSION_ID=39 00:01:21.579 ++ VERSION_CODENAME= 00:01:21.579 ++ PLATFORM_ID=platform:f39 00:01:21.579 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:21.579 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.579 ++ LOGO=fedora-logo-icon 00:01:21.579 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:21.579 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.579 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:21.579 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.579 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.579 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.579 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:21.579 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.579 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:21.579 ++ SUPPORT_END=2024-11-12 00:01:21.579 ++ VARIANT='Cloud Edition' 00:01:21.579 ++ VARIANT_ID=cloud 00:01:21.579 + uname -a 00:01:21.579 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:21.579 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:22.150 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:22.150 Hugepages 00:01:22.150 node hugesize free / total 00:01:22.150 node0 1048576kB 0 / 0 00:01:22.150 node0 2048kB 0 / 0 00:01:22.150 00:01:22.150 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:22.150 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:22.150 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:22.410 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 
nvme0n1 nvme0n2 nvme0n3 00:01:22.410 + rm -f /tmp/spdk-ld-path 00:01:22.410 + source autorun-spdk.conf 00:01:22.410 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.410 ++ SPDK_RUN_ASAN=1 00:01:22.410 ++ SPDK_RUN_UBSAN=1 00:01:22.410 ++ SPDK_TEST_RAID=1 00:01:22.410 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.410 ++ RUN_NIGHTLY=1 00:01:22.410 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:22.410 + [[ -n '' ]] 00:01:22.410 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:22.410 + for M in /var/spdk/build-*-manifest.txt 00:01:22.410 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:22.410 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.410 + for M in /var/spdk/build-*-manifest.txt 00:01:22.410 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:22.410 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.410 + for M in /var/spdk/build-*-manifest.txt 00:01:22.410 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:22.410 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.410 ++ uname 00:01:22.410 + [[ Linux == \L\i\n\u\x ]] 00:01:22.410 + sudo dmesg -T 00:01:22.410 + sudo dmesg --clear 00:01:22.410 + dmesg_pid=5419 00:01:22.410 + [[ Fedora Linux == FreeBSD ]] 00:01:22.410 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.410 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.410 + sudo dmesg -Tw 00:01:22.410 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:22.410 + [[ -x /usr/src/fio-static/fio ]] 00:01:22.410 + export FIO_BIN=/usr/src/fio-static/fio 00:01:22.410 + FIO_BIN=/usr/src/fio-static/fio 00:01:22.410 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:22.410 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:22.410 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:22.410 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.410 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.410 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:22.410 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.410 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.410 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:22.410 Test configuration: 00:01:22.410 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.410 SPDK_RUN_ASAN=1 00:01:22.410 SPDK_RUN_UBSAN=1 00:01:22.410 SPDK_TEST_RAID=1 00:01:22.410 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.671 RUN_NIGHTLY=1 08:39:00 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:22.671 08:39:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:22.671 08:39:00 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:22.671 08:39:00 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:22.671 08:39:00 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:22.671 08:39:00 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:22.671 08:39:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.671 08:39:00 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.671 08:39:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.671 08:39:00 -- paths/export.sh@5 -- $ export PATH 00:01:22.671 08:39:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.671 08:39:00 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:22.671 08:39:00 -- common/autobuild_common.sh@479 -- $ date +%s 00:01:22.671 08:39:00 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727512740.XXXXXX 00:01:22.671 08:39:00 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727512740.NEkeFB 00:01:22.671 08:39:00 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:01:22.671 08:39:00 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:01:22.671 08:39:00 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:22.671 08:39:00 -- common/autobuild_common.sh@492 
-- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:22.671 08:39:00 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:22.671 08:39:00 -- common/autobuild_common.sh@495 -- $ get_config_params 00:01:22.671 08:39:00 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:22.671 08:39:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.671 08:39:00 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:22.671 08:39:00 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:01:22.671 08:39:00 -- pm/common@17 -- $ local monitor 00:01:22.671 08:39:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.671 08:39:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.671 08:39:00 -- pm/common@25 -- $ sleep 1 00:01:22.671 08:39:00 -- pm/common@21 -- $ date +%s 00:01:22.671 08:39:00 -- pm/common@21 -- $ date +%s 00:01:22.671 08:39:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727512740 00:01:22.671 08:39:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727512740 00:01:22.671 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727512740_collect-cpu-load.pm.log 00:01:22.671 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727512740_collect-vmstat.pm.log 00:01:23.612 08:39:01 -- common/autobuild_common.sh@498 -- 
$ trap stop_monitor_resources EXIT 00:01:23.612 08:39:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:23.612 08:39:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:23.612 08:39:01 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:23.612 08:39:01 -- spdk/autobuild.sh@16 -- $ date -u 00:01:23.612 Sat Sep 28 08:39:01 AM UTC 2024 00:01:23.612 08:39:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:23.612 v25.01-pre-17-g09cc66129 00:01:23.612 08:39:01 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:23.612 08:39:01 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:23.612 08:39:01 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:23.612 08:39:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:23.612 08:39:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.612 ************************************ 00:01:23.612 START TEST asan 00:01:23.612 ************************************ 00:01:23.612 using asan 00:01:23.612 08:39:01 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:23.612 00:01:23.612 real 0m0.001s 00:01:23.612 user 0m0.000s 00:01:23.612 sys 0m0.001s 00:01:23.612 08:39:01 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:23.612 08:39:01 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:23.612 ************************************ 00:01:23.612 END TEST asan 00:01:23.612 ************************************ 00:01:23.873 08:39:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:23.873 08:39:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:23.873 08:39:01 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:23.873 08:39:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:23.873 08:39:01 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.873 ************************************ 00:01:23.873 START TEST ubsan 00:01:23.873 ************************************ 00:01:23.873 using ubsan 00:01:23.873 08:39:01 ubsan -- 
common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:23.873 00:01:23.873 real 0m0.000s 00:01:23.873 user 0m0.000s 00:01:23.873 sys 0m0.000s 00:01:23.873 08:39:01 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:23.873 08:39:01 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:23.873 ************************************ 00:01:23.873 END TEST ubsan 00:01:23.873 ************************************ 00:01:23.873 08:39:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:23.873 08:39:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:23.873 08:39:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:23.873 08:39:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:23.873 08:39:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:23.873 08:39:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:23.873 08:39:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:23.873 08:39:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:23.873 08:39:01 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:23.873 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:23.873 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:24.443 Using 'verbs' RDMA provider 00:01:40.756 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:58.862 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:58.862 Creating mk/config.mk...done. 00:01:58.862 Creating mk/cc.flags.mk...done. 00:01:58.862 Type 'make' to build. 
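The `run_test asan ...`, `run_test ubsan ...`, and `run_test make make -j10` calls above all share one shape: a named wrapper that prints START/END banners and times the wrapped command. A hedged illustration of that kind of wrapper (not SPDK's actual implementation, which lives in `autotest_common.sh` and also records real/user/sys timings):

```shell
#!/usr/bin/env bash
# Illustrative run_test-style wrapper: banner the named test, run the
# command, and propagate its exit status. Simplified vs. the real helper.
run_test() {
    local name=$1; shift
    echo "START TEST ${name}"
    "$@"
    local rc=$?   # $? expands before `local` runs, so rc is the command's status
    echo "END TEST ${name}"
    return $rc
}

run_test demo echo 'using asan'
```

The banners are what produce the `START TEST` / `END TEST` asterisk blocks visible in the log.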
00:01:58.862 08:39:34 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:58.862 08:39:35 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:58.862 08:39:35 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:58.862 08:39:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.862 ************************************ 00:01:58.862 START TEST make 00:01:58.862 ************************************ 00:01:58.862 08:39:35 make -- common/autotest_common.sh@1125 -- $ make -j10 00:01:58.862 make[1]: Nothing to be done for 'all'. 00:02:07.012 The Meson build system 00:02:07.012 Version: 1.5.0 00:02:07.012 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:07.012 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:07.012 Build type: native build 00:02:07.012 Program cat found: YES (/usr/bin/cat) 00:02:07.012 Project name: DPDK 00:02:07.012 Project version: 24.03.0 00:02:07.012 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:07.012 C linker for the host machine: cc ld.bfd 2.40-14 00:02:07.012 Host machine cpu family: x86_64 00:02:07.012 Host machine cpu: x86_64 00:02:07.012 Message: ## Building in Developer Mode ## 00:02:07.012 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:07.012 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:07.012 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:07.012 Program python3 found: YES (/usr/bin/python3) 00:02:07.012 Program cat found: YES (/usr/bin/cat) 00:02:07.012 Compiler for C supports arguments -march=native: YES 00:02:07.012 Checking for size of "void *" : 8 00:02:07.012 Checking for size of "void *" : 8 (cached) 00:02:07.012 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:07.012 Library m found: YES 00:02:07.012 Library numa found: YES 00:02:07.012 Has header "numaif.h" : YES 
00:02:07.012 Library fdt found: NO 00:02:07.012 Library execinfo found: NO 00:02:07.012 Has header "execinfo.h" : YES 00:02:07.012 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:07.012 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:07.012 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:07.012 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:07.012 Run-time dependency openssl found: YES 3.1.1 00:02:07.012 Run-time dependency libpcap found: YES 1.10.4 00:02:07.012 Has header "pcap.h" with dependency libpcap: YES 00:02:07.012 Compiler for C supports arguments -Wcast-qual: YES 00:02:07.012 Compiler for C supports arguments -Wdeprecated: YES 00:02:07.012 Compiler for C supports arguments -Wformat: YES 00:02:07.012 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:07.012 Compiler for C supports arguments -Wformat-security: NO 00:02:07.012 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:07.012 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:07.012 Compiler for C supports arguments -Wnested-externs: YES 00:02:07.012 Compiler for C supports arguments -Wold-style-definition: YES 00:02:07.012 Compiler for C supports arguments -Wpointer-arith: YES 00:02:07.012 Compiler for C supports arguments -Wsign-compare: YES 00:02:07.012 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:07.012 Compiler for C supports arguments -Wundef: YES 00:02:07.012 Compiler for C supports arguments -Wwrite-strings: YES 00:02:07.012 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:07.012 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:07.012 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:07.012 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:07.012 Program objdump found: YES (/usr/bin/objdump) 00:02:07.012 Compiler for C supports arguments -mavx512f: YES 00:02:07.012 Checking if "AVX512 
checking" compiles: YES 00:02:07.012 Fetching value of define "__SSE4_2__" : 1 00:02:07.012 Fetching value of define "__AES__" : 1 00:02:07.012 Fetching value of define "__AVX__" : 1 00:02:07.012 Fetching value of define "__AVX2__" : 1 00:02:07.012 Fetching value of define "__AVX512BW__" : 1 00:02:07.012 Fetching value of define "__AVX512CD__" : 1 00:02:07.012 Fetching value of define "__AVX512DQ__" : 1 00:02:07.012 Fetching value of define "__AVX512F__" : 1 00:02:07.012 Fetching value of define "__AVX512VL__" : 1 00:02:07.012 Fetching value of define "__PCLMUL__" : 1 00:02:07.012 Fetching value of define "__RDRND__" : 1 00:02:07.012 Fetching value of define "__RDSEED__" : 1 00:02:07.012 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:07.012 Fetching value of define "__znver1__" : (undefined) 00:02:07.012 Fetching value of define "__znver2__" : (undefined) 00:02:07.012 Fetching value of define "__znver3__" : (undefined) 00:02:07.012 Fetching value of define "__znver4__" : (undefined) 00:02:07.012 Library asan found: YES 00:02:07.012 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:07.012 Message: lib/log: Defining dependency "log" 00:02:07.012 Message: lib/kvargs: Defining dependency "kvargs" 00:02:07.012 Message: lib/telemetry: Defining dependency "telemetry" 00:02:07.012 Library rt found: YES 00:02:07.012 Checking for function "getentropy" : NO 00:02:07.012 Message: lib/eal: Defining dependency "eal" 00:02:07.012 Message: lib/ring: Defining dependency "ring" 00:02:07.012 Message: lib/rcu: Defining dependency "rcu" 00:02:07.012 Message: lib/mempool: Defining dependency "mempool" 00:02:07.012 Message: lib/mbuf: Defining dependency "mbuf" 00:02:07.012 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:07.012 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:07.012 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:07.012 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:07.012 Fetching value of define 
"__AVX512VL__" : 1 (cached) 00:02:07.012 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:07.012 Compiler for C supports arguments -mpclmul: YES 00:02:07.012 Compiler for C supports arguments -maes: YES 00:02:07.012 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:07.012 Compiler for C supports arguments -mavx512bw: YES 00:02:07.012 Compiler for C supports arguments -mavx512dq: YES 00:02:07.012 Compiler for C supports arguments -mavx512vl: YES 00:02:07.012 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:07.012 Compiler for C supports arguments -mavx2: YES 00:02:07.012 Compiler for C supports arguments -mavx: YES 00:02:07.012 Message: lib/net: Defining dependency "net" 00:02:07.012 Message: lib/meter: Defining dependency "meter" 00:02:07.012 Message: lib/ethdev: Defining dependency "ethdev" 00:02:07.012 Message: lib/pci: Defining dependency "pci" 00:02:07.012 Message: lib/cmdline: Defining dependency "cmdline" 00:02:07.012 Message: lib/hash: Defining dependency "hash" 00:02:07.012 Message: lib/timer: Defining dependency "timer" 00:02:07.012 Message: lib/compressdev: Defining dependency "compressdev" 00:02:07.012 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:07.012 Message: lib/dmadev: Defining dependency "dmadev" 00:02:07.012 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:07.012 Message: lib/power: Defining dependency "power" 00:02:07.012 Message: lib/reorder: Defining dependency "reorder" 00:02:07.012 Message: lib/security: Defining dependency "security" 00:02:07.012 Has header "linux/userfaultfd.h" : YES 00:02:07.012 Has header "linux/vduse.h" : YES 00:02:07.012 Message: lib/vhost: Defining dependency "vhost" 00:02:07.012 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:07.012 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:07.012 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:07.012 Message: drivers/mempool/ring: Defining 
dependency "mempool_ring" 00:02:07.012 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:07.012 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:07.012 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:07.012 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:07.012 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:07.012 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:07.012 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:07.012 Configuring doxy-api-html.conf using configuration 00:02:07.012 Configuring doxy-api-man.conf using configuration 00:02:07.012 Program mandb found: YES (/usr/bin/mandb) 00:02:07.012 Program sphinx-build found: NO 00:02:07.012 Configuring rte_build_config.h using configuration 00:02:07.012 Message: 00:02:07.012 ================= 00:02:07.012 Applications Enabled 00:02:07.012 ================= 00:02:07.012 00:02:07.012 apps: 00:02:07.012 00:02:07.012 00:02:07.012 Message: 00:02:07.012 ================= 00:02:07.012 Libraries Enabled 00:02:07.012 ================= 00:02:07.012 00:02:07.012 libs: 00:02:07.012 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:07.012 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:07.012 cryptodev, dmadev, power, reorder, security, vhost, 00:02:07.012 00:02:07.012 Message: 00:02:07.012 =============== 00:02:07.012 Drivers Enabled 00:02:07.012 =============== 00:02:07.012 00:02:07.012 common: 00:02:07.012 00:02:07.012 bus: 00:02:07.012 pci, vdev, 00:02:07.012 mempool: 00:02:07.012 ring, 00:02:07.012 dma: 00:02:07.012 00:02:07.012 net: 00:02:07.012 00:02:07.012 crypto: 00:02:07.012 00:02:07.012 compress: 00:02:07.012 00:02:07.012 vdpa: 00:02:07.012 00:02:07.012 00:02:07.012 Message: 00:02:07.012 ================= 00:02:07.012 Content Skipped 00:02:07.012 ================= 00:02:07.012 00:02:07.012 apps: 
00:02:07.012 dumpcap: explicitly disabled via build config 00:02:07.012 graph: explicitly disabled via build config 00:02:07.012 pdump: explicitly disabled via build config 00:02:07.012 proc-info: explicitly disabled via build config 00:02:07.012 test-acl: explicitly disabled via build config 00:02:07.012 test-bbdev: explicitly disabled via build config 00:02:07.012 test-cmdline: explicitly disabled via build config 00:02:07.013 test-compress-perf: explicitly disabled via build config 00:02:07.013 test-crypto-perf: explicitly disabled via build config 00:02:07.013 test-dma-perf: explicitly disabled via build config 00:02:07.013 test-eventdev: explicitly disabled via build config 00:02:07.013 test-fib: explicitly disabled via build config 00:02:07.013 test-flow-perf: explicitly disabled via build config 00:02:07.013 test-gpudev: explicitly disabled via build config 00:02:07.013 test-mldev: explicitly disabled via build config 00:02:07.013 test-pipeline: explicitly disabled via build config 00:02:07.013 test-pmd: explicitly disabled via build config 00:02:07.013 test-regex: explicitly disabled via build config 00:02:07.013 test-sad: explicitly disabled via build config 00:02:07.013 test-security-perf: explicitly disabled via build config 00:02:07.013 00:02:07.013 libs: 00:02:07.013 argparse: explicitly disabled via build config 00:02:07.013 metrics: explicitly disabled via build config 00:02:07.013 acl: explicitly disabled via build config 00:02:07.013 bbdev: explicitly disabled via build config 00:02:07.013 bitratestats: explicitly disabled via build config 00:02:07.013 bpf: explicitly disabled via build config 00:02:07.013 cfgfile: explicitly disabled via build config 00:02:07.013 distributor: explicitly disabled via build config 00:02:07.013 efd: explicitly disabled via build config 00:02:07.013 eventdev: explicitly disabled via build config 00:02:07.013 dispatcher: explicitly disabled via build config 00:02:07.013 gpudev: explicitly disabled via build config 
00:02:07.013 gro: explicitly disabled via build config 00:02:07.013 gso: explicitly disabled via build config 00:02:07.013 ip_frag: explicitly disabled via build config 00:02:07.013 jobstats: explicitly disabled via build config 00:02:07.013 latencystats: explicitly disabled via build config 00:02:07.013 lpm: explicitly disabled via build config 00:02:07.013 member: explicitly disabled via build config 00:02:07.013 pcapng: explicitly disabled via build config 00:02:07.013 rawdev: explicitly disabled via build config 00:02:07.013 regexdev: explicitly disabled via build config 00:02:07.013 mldev: explicitly disabled via build config 00:02:07.013 rib: explicitly disabled via build config 00:02:07.013 sched: explicitly disabled via build config 00:02:07.013 stack: explicitly disabled via build config 00:02:07.013 ipsec: explicitly disabled via build config 00:02:07.013 pdcp: explicitly disabled via build config 00:02:07.013 fib: explicitly disabled via build config 00:02:07.013 port: explicitly disabled via build config 00:02:07.013 pdump: explicitly disabled via build config 00:02:07.013 table: explicitly disabled via build config 00:02:07.013 pipeline: explicitly disabled via build config 00:02:07.013 graph: explicitly disabled via build config 00:02:07.013 node: explicitly disabled via build config 00:02:07.013 00:02:07.013 drivers: 00:02:07.013 common/cpt: not in enabled drivers build config 00:02:07.013 common/dpaax: not in enabled drivers build config 00:02:07.013 common/iavf: not in enabled drivers build config 00:02:07.013 common/idpf: not in enabled drivers build config 00:02:07.013 common/ionic: not in enabled drivers build config 00:02:07.013 common/mvep: not in enabled drivers build config 00:02:07.013 common/octeontx: not in enabled drivers build config 00:02:07.013 bus/auxiliary: not in enabled drivers build config 00:02:07.013 bus/cdx: not in enabled drivers build config 00:02:07.013 bus/dpaa: not in enabled drivers build config 00:02:07.013 bus/fslmc: 
not in enabled drivers build config 00:02:07.013 bus/ifpga: not in enabled drivers build config 00:02:07.013 bus/platform: not in enabled drivers build config 00:02:07.013 bus/uacce: not in enabled drivers build config 00:02:07.013 bus/vmbus: not in enabled drivers build config 00:02:07.013 common/cnxk: not in enabled drivers build config 00:02:07.013 common/mlx5: not in enabled drivers build config 00:02:07.013 common/nfp: not in enabled drivers build config 00:02:07.013 common/nitrox: not in enabled drivers build config 00:02:07.013 common/qat: not in enabled drivers build config 00:02:07.013 common/sfc_efx: not in enabled drivers build config 00:02:07.013 mempool/bucket: not in enabled drivers build config 00:02:07.013 mempool/cnxk: not in enabled drivers build config 00:02:07.013 mempool/dpaa: not in enabled drivers build config 00:02:07.013 mempool/dpaa2: not in enabled drivers build config 00:02:07.013 mempool/octeontx: not in enabled drivers build config 00:02:07.013 mempool/stack: not in enabled drivers build config 00:02:07.013 dma/cnxk: not in enabled drivers build config 00:02:07.013 dma/dpaa: not in enabled drivers build config 00:02:07.013 dma/dpaa2: not in enabled drivers build config 00:02:07.013 dma/hisilicon: not in enabled drivers build config 00:02:07.013 dma/idxd: not in enabled drivers build config 00:02:07.013 dma/ioat: not in enabled drivers build config 00:02:07.013 dma/skeleton: not in enabled drivers build config 00:02:07.013 net/af_packet: not in enabled drivers build config 00:02:07.013 net/af_xdp: not in enabled drivers build config 00:02:07.013 net/ark: not in enabled drivers build config 00:02:07.013 net/atlantic: not in enabled drivers build config 00:02:07.013 net/avp: not in enabled drivers build config 00:02:07.013 net/axgbe: not in enabled drivers build config 00:02:07.013 net/bnx2x: not in enabled drivers build config 00:02:07.013 net/bnxt: not in enabled drivers build config 00:02:07.013 net/bonding: not in enabled drivers 
build config 00:02:07.013 net/cnxk: not in enabled drivers build config 00:02:07.013 net/cpfl: not in enabled drivers build config 00:02:07.013 net/cxgbe: not in enabled drivers build config 00:02:07.013 net/dpaa: not in enabled drivers build config 00:02:07.013 net/dpaa2: not in enabled drivers build config 00:02:07.013 net/e1000: not in enabled drivers build config 00:02:07.013 net/ena: not in enabled drivers build config 00:02:07.013 net/enetc: not in enabled drivers build config 00:02:07.013 net/enetfec: not in enabled drivers build config 00:02:07.013 net/enic: not in enabled drivers build config 00:02:07.013 net/failsafe: not in enabled drivers build config 00:02:07.013 net/fm10k: not in enabled drivers build config 00:02:07.013 net/gve: not in enabled drivers build config 00:02:07.013 net/hinic: not in enabled drivers build config 00:02:07.013 net/hns3: not in enabled drivers build config 00:02:07.013 net/i40e: not in enabled drivers build config 00:02:07.013 net/iavf: not in enabled drivers build config 00:02:07.013 net/ice: not in enabled drivers build config 00:02:07.013 net/idpf: not in enabled drivers build config 00:02:07.013 net/igc: not in enabled drivers build config 00:02:07.013 net/ionic: not in enabled drivers build config 00:02:07.013 net/ipn3ke: not in enabled drivers build config 00:02:07.013 net/ixgbe: not in enabled drivers build config 00:02:07.013 net/mana: not in enabled drivers build config 00:02:07.013 net/memif: not in enabled drivers build config 00:02:07.013 net/mlx4: not in enabled drivers build config 00:02:07.013 net/mlx5: not in enabled drivers build config 00:02:07.013 net/mvneta: not in enabled drivers build config 00:02:07.013 net/mvpp2: not in enabled drivers build config 00:02:07.013 net/netvsc: not in enabled drivers build config 00:02:07.013 net/nfb: not in enabled drivers build config 00:02:07.013 net/nfp: not in enabled drivers build config 00:02:07.013 net/ngbe: not in enabled drivers build config 00:02:07.013 net/null: 
not in enabled drivers build config 00:02:07.013 net/octeontx: not in enabled drivers build config 00:02:07.013 net/octeon_ep: not in enabled drivers build config 00:02:07.013 net/pcap: not in enabled drivers build config 00:02:07.013 net/pfe: not in enabled drivers build config 00:02:07.013 net/qede: not in enabled drivers build config 00:02:07.013 net/ring: not in enabled drivers build config 00:02:07.013 net/sfc: not in enabled drivers build config 00:02:07.013 net/softnic: not in enabled drivers build config 00:02:07.013 net/tap: not in enabled drivers build config 00:02:07.013 net/thunderx: not in enabled drivers build config 00:02:07.013 net/txgbe: not in enabled drivers build config 00:02:07.013 net/vdev_netvsc: not in enabled drivers build config 00:02:07.013 net/vhost: not in enabled drivers build config 00:02:07.013 net/virtio: not in enabled drivers build config 00:02:07.013 net/vmxnet3: not in enabled drivers build config 00:02:07.013 raw/*: missing internal dependency, "rawdev" 00:02:07.013 crypto/armv8: not in enabled drivers build config 00:02:07.013 crypto/bcmfs: not in enabled drivers build config 00:02:07.013 crypto/caam_jr: not in enabled drivers build config 00:02:07.013 crypto/ccp: not in enabled drivers build config 00:02:07.013 crypto/cnxk: not in enabled drivers build config 00:02:07.013 crypto/dpaa_sec: not in enabled drivers build config 00:02:07.013 crypto/dpaa2_sec: not in enabled drivers build config 00:02:07.013 crypto/ipsec_mb: not in enabled drivers build config 00:02:07.013 crypto/mlx5: not in enabled drivers build config 00:02:07.013 crypto/mvsam: not in enabled drivers build config 00:02:07.013 crypto/nitrox: not in enabled drivers build config 00:02:07.013 crypto/null: not in enabled drivers build config 00:02:07.013 crypto/octeontx: not in enabled drivers build config 00:02:07.013 crypto/openssl: not in enabled drivers build config 00:02:07.013 crypto/scheduler: not in enabled drivers build config 00:02:07.013 crypto/uadk: not 
in enabled drivers build config 00:02:07.013 crypto/virtio: not in enabled drivers build config 00:02:07.013 compress/isal: not in enabled drivers build config 00:02:07.013 compress/mlx5: not in enabled drivers build config 00:02:07.013 compress/nitrox: not in enabled drivers build config 00:02:07.013 compress/octeontx: not in enabled drivers build config 00:02:07.013 compress/zlib: not in enabled drivers build config 00:02:07.013 regex/*: missing internal dependency, "regexdev" 00:02:07.013 ml/*: missing internal dependency, "mldev" 00:02:07.013 vdpa/ifc: not in enabled drivers build config 00:02:07.013 vdpa/mlx5: not in enabled drivers build config 00:02:07.013 vdpa/nfp: not in enabled drivers build config 00:02:07.013 vdpa/sfc: not in enabled drivers build config 00:02:07.013 event/*: missing internal dependency, "eventdev" 00:02:07.013 baseband/*: missing internal dependency, "bbdev" 00:02:07.013 gpu/*: missing internal dependency, "gpudev" 00:02:07.013 00:02:07.013 00:02:07.272 Build targets in project: 85 00:02:07.272 00:02:07.272 DPDK 24.03.0 00:02:07.272 00:02:07.272 User defined options 00:02:07.272 buildtype : debug 00:02:07.272 default_library : shared 00:02:07.272 libdir : lib 00:02:07.272 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:07.272 b_sanitize : address 00:02:07.272 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:07.272 c_link_args : 00:02:07.272 cpu_instruction_set: native 00:02:07.272 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:07.272 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:07.272 enable_docs : false 00:02:07.272 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:07.272 enable_kmods : false 00:02:07.272 max_lcores : 128 00:02:07.272 tests : false 00:02:07.272 00:02:07.272 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:07.840 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:07.840 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:07.840 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:07.840 [3/268] Linking static target lib/librte_kvargs.a 00:02:07.840 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:07.840 [5/268] Linking static target lib/librte_log.a 00:02:07.840 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:08.098 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:08.098 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:08.356 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:08.356 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:08.356 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.356 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:08.356 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:08.356 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:08.356 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:08.356 [16/268] Linking static target lib/librte_telemetry.a 00:02:08.614 [17/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:08.615 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:08.615 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:08.873 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:08.873 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.873 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:08.873 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:08.873 [24/268] Linking target lib/librte_log.so.24.1 00:02:08.873 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:08.873 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:08.873 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:09.132 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:09.132 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:09.132 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:09.132 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.132 [32/268] Linking target lib/librte_kvargs.so.24.1 00:02:09.132 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:09.391 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:09.391 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:09.391 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:09.391 [37/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:09.391 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 
00:02:09.391 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:09.391 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:09.391 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:09.391 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:09.391 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:09.650 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:09.650 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:09.650 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:09.909 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:09.909 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:09.909 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:10.168 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:10.168 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:10.168 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:10.168 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:10.168 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:10.168 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:10.168 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:10.168 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:10.427 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:10.427 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:10.427 [60/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:10.427 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:10.427 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:10.685 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:10.685 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:10.685 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:10.685 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:10.685 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:10.685 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:10.944 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:10.944 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:11.203 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:11.203 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:11.203 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:11.203 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:11.203 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:11.203 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:11.203 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:11.462 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:11.462 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:11.462 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:11.462 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:11.462 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:11.721 [83/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:11.721 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:11.721 [85/268] Linking static target lib/librte_ring.a
00:02:11.721 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:11.721 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:11.721 [88/268] Linking static target lib/librte_eal.a
00:02:11.721 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:11.980 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:11.980 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:11.980 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:11.980 [93/268] Linking static target lib/librte_mempool.a
00:02:12.239 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:12.239 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.239 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:12.239 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:12.239 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:12.239 [99/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:12.239 [100/268] Linking static target lib/librte_rcu.a
00:02:12.239 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:12.239 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:12.498 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:12.498 [104/268] Linking static target lib/librte_mbuf.a
00:02:12.498 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:12.498 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:12.498 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:12.498 [108/268] Linking static target lib/librte_meter.a
00:02:12.757 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:12.757 [110/268] Linking static target lib/librte_net.a
00:02:12.757 [111/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.757 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:12.757 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:13.016 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:13.016 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.016 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.275 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.275 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:13.275 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:13.534 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.534 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:13.534 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:13.534 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:13.793 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:13.793 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:13.793 [126/268] Linking static target lib/librte_pci.a
00:02:13.793 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:14.052 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:14.052 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079..c.o
00:02:14.052 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:14.052 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:14.052 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:14.311 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.311 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:14.311 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:14.311 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:14.311 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:14.311 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:14.311 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:14.311 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:14.311 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:14.311 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:14.311 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:14.311 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:14.570 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:14.570 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:14.570 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:14.570 [148/268] Linking static target lib/librte_cmdline.a
00:02:14.829 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:14.829 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:14.829 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:15.089 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:15.089 [153/268] Linking static target lib/librte_timer.a
00:02:15.089 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:15.089 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:15.348 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:15.348 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:15.348 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:15.348 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:15.348 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:15.348 [161/268] Linking static target lib/librte_compressdev.a
00:02:15.348 [162/268] Linking static target lib/librte_hash.a
00:02:15.348 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:15.348 [164/268] Linking static target lib/librte_ethdev.a
00:02:15.606 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.607 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:15.607 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:15.607 [168/268] Linking static target lib/librte_dmadev.a
00:02:15.865 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:15.865 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:15.865 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:16.124 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:16.124 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.124 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:16.382 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.382 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:16.382 [177/268] Linking static target lib/librte_cryptodev.a
00:02:16.382 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:16.382 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:16.382 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.382 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:16.382 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:16.640 [183/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.640 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:16.640 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:16.640 [186/268] Linking static target lib/librte_power.a
00:02:16.899 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:16.899 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:16.899 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:17.158 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:17.158 [191/268] Linking static target lib/librte_reorder.a
00:02:17.158 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:17.158 [193/268] Linking static target lib/librte_security.a
00:02:17.418 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:17.677 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.677 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.677 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:17.936 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:17.936 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:17.936 [200/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.194 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:18.194 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:18.194 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:18.194 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:18.453 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:18.453 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.453 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:18.453 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:18.453 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:18.712 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:18.712 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:18.712 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:18.712 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:18.712 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:18.712 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:18.712 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:18.971 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:18.971 [218/268] Linking static target drivers/librte_bus_vdev.a
00:02:18.971 [219/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:18.971 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:18.971 [221/268] Linking static target drivers/librte_bus_pci.a
00:02:18.971 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:18.971 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:18.971 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:18.971 [225/268] Linking static target drivers/librte_mempool_ring.a
00:02:19.229 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.229 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.610 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:21.992 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.992 [230/268] Linking target lib/librte_eal.so.24.1
00:02:21.992 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:21.992 [232/268] Linking target lib/librte_meter.so.24.1
00:02:21.992 [233/268] Linking target lib/librte_dmadev.so.24.1
00:02:21.992 [234/268] Linking target lib/librte_pci.so.24.1
00:02:21.992 [235/268] Linking target lib/librte_ring.so.24.1
00:02:21.992 [236/268] Linking target lib/librte_timer.so.24.1
00:02:21.992 [237/268] Linking target drivers/librte_bus_vdev.so.24.1
00:02:22.252 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:22.252 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:22.252 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:22.252 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:22.252 [242/268] Linking target lib/librte_mempool.so.24.1
00:02:22.252 [243/268] Linking target lib/librte_rcu.so.24.1
00:02:22.252 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:22.252 [245/268] Linking target drivers/librte_bus_pci.so.24.1
00:02:22.252 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:22.252 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:22.512 [248/268] Linking target lib/librte_mbuf.so.24.1
00:02:22.512 [249/268] Linking target drivers/librte_mempool_ring.so.24.1
00:02:22.512 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:22.512 [251/268] Linking target lib/librte_compressdev.so.24.1
00:02:22.512 [252/268] Linking target lib/librte_cryptodev.so.24.1
00:02:22.512 [253/268] Linking target lib/librte_reorder.so.24.1
00:02:22.512 [254/268] Linking target lib/librte_net.so.24.1
00:02:22.772 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:22.772 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:22.772 [257/268] Linking target lib/librte_cmdline.so.24.1
00:02:22.772 [258/268] Linking target lib/librte_hash.so.24.1
00:02:22.772 [259/268] Linking target lib/librte_security.so.24.1
00:02:22.772 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:23.711 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.711 [262/268] Linking target lib/librte_ethdev.so.24.1
00:02:23.971 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:23.971 [264/268] Linking target lib/librte_power.so.24.1
00:02:23.971 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:23.971 [266/268] Linking static target lib/librte_vhost.a
00:02:26.511 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.511 [268/268] Linking target lib/librte_vhost.so.24.1
00:02:26.511 INFO: autodetecting backend as ninja
00:02:26.511 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:02:44.614 CC lib/ut/ut.o
00:02:44.614 CC lib/log/log.o
00:02:44.614 CC lib/log/log_deprecated.o
00:02:44.614 CC lib/log/log_flags.o
00:02:44.614 CC lib/ut_mock/mock.o
00:02:44.614 LIB libspdk_ut.a
00:02:44.614 LIB libspdk_ut_mock.a
00:02:44.614 LIB libspdk_log.a
00:02:44.614 SO libspdk_ut.so.2.0
00:02:44.614 SO libspdk_ut_mock.so.6.0
00:02:44.614 SO libspdk_log.so.7.0
00:02:44.614 SYMLINK libspdk_ut.so
00:02:44.614 SYMLINK libspdk_ut_mock.so
00:02:44.614 SYMLINK libspdk_log.so
00:02:44.614 CC lib/dma/dma.o
00:02:44.614 CC lib/ioat/ioat.o
00:02:44.614 CC lib/util/base64.o
00:02:44.614 CC lib/util/bit_array.o
00:02:44.614 CC lib/util/cpuset.o
00:02:44.614 CC lib/util/crc16.o
00:02:44.614 CC lib/util/crc32.o
00:02:44.614 CC lib/util/crc32c.o
00:02:44.614 CXX lib/trace_parser/trace.o
00:02:44.614 CC lib/vfio_user/host/vfio_user_pci.o
00:02:44.614 CC lib/util/crc32_ieee.o
00:02:44.615 CC lib/util/crc64.o
00:02:44.615 CC lib/util/dif.o
00:02:44.615 LIB libspdk_dma.a
00:02:44.615 SO libspdk_dma.so.5.0
00:02:44.615 CC lib/util/fd.o
00:02:44.615 CC lib/util/fd_group.o
00:02:44.615 CC lib/util/file.o
00:02:44.615 SYMLINK libspdk_dma.so
00:02:44.615 CC lib/util/hexlify.o
00:02:44.615 LIB libspdk_ioat.a
00:02:44.615 CC lib/util/iov.o
00:02:44.615 SO libspdk_ioat.so.7.0
00:02:44.615 CC lib/util/math.o
00:02:44.615 SYMLINK libspdk_ioat.so
00:02:44.615 CC lib/vfio_user/host/vfio_user.o
00:02:44.615 CC lib/util/net.o
00:02:44.615 CC lib/util/pipe.o
00:02:44.615 CC lib/util/strerror_tls.o
00:02:44.615 CC lib/util/string.o
00:02:44.615 CC lib/util/uuid.o
00:02:44.615 CC lib/util/xor.o
00:02:44.615 CC lib/util/zipf.o
00:02:44.615 CC lib/util/md5.o
00:02:44.615 LIB libspdk_vfio_user.a
00:02:44.615 SO libspdk_vfio_user.so.5.0
00:02:44.615 SYMLINK libspdk_vfio_user.so
00:02:44.615 LIB libspdk_util.a
00:02:44.615 SO libspdk_util.so.10.0
00:02:44.615 LIB libspdk_trace_parser.a
00:02:44.615 SYMLINK libspdk_util.so
00:02:44.615 SO libspdk_trace_parser.so.6.0
00:02:44.615 SYMLINK libspdk_trace_parser.so
00:02:44.615 CC lib/env_dpdk/env.o
00:02:44.615 CC lib/env_dpdk/memory.o
00:02:44.615 CC lib/rdma_utils/rdma_utils.o
00:02:44.615 CC lib/env_dpdk/pci.o
00:02:44.615 CC lib/env_dpdk/init.o
00:02:44.615 CC lib/json/json_parse.o
00:02:44.615 CC lib/idxd/idxd.o
00:02:44.615 CC lib/conf/conf.o
00:02:44.615 CC lib/vmd/vmd.o
00:02:44.615 CC lib/rdma_provider/common.o
00:02:44.615 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:44.615 CC lib/json/json_util.o
00:02:44.615 LIB libspdk_conf.a
00:02:44.615 SO libspdk_conf.so.6.0
00:02:44.615 LIB libspdk_rdma_utils.a
00:02:44.615 SO libspdk_rdma_utils.so.1.0
00:02:44.615 SYMLINK libspdk_conf.so
00:02:44.615 CC lib/vmd/led.o
00:02:44.615 SYMLINK libspdk_rdma_utils.so
00:02:44.615 CC lib/json/json_write.o
00:02:44.615 CC lib/env_dpdk/threads.o
00:02:44.615 LIB libspdk_rdma_provider.a
00:02:44.615 CC lib/env_dpdk/pci_ioat.o
00:02:44.615 SO libspdk_rdma_provider.so.6.0
00:02:44.615 SYMLINK libspdk_rdma_provider.so
00:02:44.615 CC lib/idxd/idxd_user.o
00:02:44.615 CC lib/idxd/idxd_kernel.o
00:02:44.615 CC lib/env_dpdk/pci_virtio.o
00:02:44.615 CC lib/env_dpdk/pci_vmd.o
00:02:44.615 CC lib/env_dpdk/pci_idxd.o
00:02:44.615 CC lib/env_dpdk/pci_event.o
00:02:44.615 CC lib/env_dpdk/sigbus_handler.o
00:02:44.615 CC lib/env_dpdk/pci_dpdk.o
00:02:44.615 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:44.615 LIB libspdk_json.a
00:02:44.615 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:44.615 SO libspdk_json.so.6.0
00:02:44.615 LIB libspdk_vmd.a
00:02:44.615 SO libspdk_vmd.so.6.0
00:02:44.615 LIB libspdk_idxd.a
00:02:44.615 SYMLINK libspdk_json.so
00:02:44.615 SO libspdk_idxd.so.12.1
00:02:44.615 SYMLINK libspdk_vmd.so
00:02:44.615 SYMLINK libspdk_idxd.so
00:02:44.615 CC lib/jsonrpc/jsonrpc_server.o
00:02:44.615 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:44.615 CC lib/jsonrpc/jsonrpc_client.o
00:02:44.615 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:44.615 LIB libspdk_jsonrpc.a
00:02:44.874 SO libspdk_jsonrpc.so.6.0
00:02:44.874 SYMLINK libspdk_jsonrpc.so
00:02:44.874 LIB libspdk_env_dpdk.a
00:02:45.134 SO libspdk_env_dpdk.so.15.0
00:02:45.134 SYMLINK libspdk_env_dpdk.so
00:02:45.134 CC lib/rpc/rpc.o
00:02:45.394 LIB libspdk_rpc.a
00:02:45.394 SO libspdk_rpc.so.6.0
00:02:45.654 SYMLINK libspdk_rpc.so
00:02:45.913 CC lib/trace/trace.o
00:02:45.913 CC lib/trace/trace_rpc.o
00:02:45.913 CC lib/trace/trace_flags.o
00:02:45.913 CC lib/keyring/keyring.o
00:02:45.913 CC lib/keyring/keyring_rpc.o
00:02:45.913 CC lib/notify/notify.o
00:02:45.913 CC lib/notify/notify_rpc.o
00:02:46.173 LIB libspdk_notify.a
00:02:46.173 SO libspdk_notify.so.6.0
00:02:46.173 LIB libspdk_keyring.a
00:02:46.173 LIB libspdk_trace.a
00:02:46.173 SO libspdk_trace.so.11.0
00:02:46.173 SYMLINK libspdk_notify.so
00:02:46.173 SO libspdk_keyring.so.2.0
00:02:46.173 SYMLINK libspdk_keyring.so
00:02:46.173 SYMLINK libspdk_trace.so
00:02:46.744 CC lib/thread/thread.o
00:02:46.744 CC lib/thread/iobuf.o
00:02:46.744 CC lib/sock/sock.o
00:02:46.744 CC lib/sock/sock_rpc.o
00:02:47.006 LIB libspdk_sock.a
00:02:47.273 SO libspdk_sock.so.10.0
00:02:47.273 SYMLINK libspdk_sock.so
00:02:47.545 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:47.545 CC lib/nvme/nvme_ctrlr.o
00:02:47.545 CC lib/nvme/nvme_fabric.o
00:02:47.545 CC lib/nvme/nvme_ns_cmd.o
00:02:47.545 CC lib/nvme/nvme_ns.o
00:02:47.545 CC lib/nvme/nvme_pcie_common.o
00:02:47.545 CC lib/nvme/nvme_pcie.o
00:02:47.545 CC lib/nvme/nvme_qpair.o
00:02:47.545 CC lib/nvme/nvme.o
00:02:48.121 LIB libspdk_thread.a
00:02:48.121 SO libspdk_thread.so.10.1
00:02:48.121 CC lib/nvme/nvme_quirks.o
00:02:48.121 SYMLINK libspdk_thread.so
00:02:48.121 CC lib/nvme/nvme_transport.o
00:02:48.380 CC lib/nvme/nvme_discovery.o
00:02:48.380 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:48.380 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:48.380 CC lib/nvme/nvme_tcp.o
00:02:48.380 CC lib/nvme/nvme_opal.o
00:02:48.640 CC lib/nvme/nvme_io_msg.o
00:02:48.640 CC lib/nvme/nvme_poll_group.o
00:02:48.640 CC lib/nvme/nvme_zns.o
00:02:48.900 CC lib/nvme/nvme_stubs.o
00:02:48.900 CC lib/nvme/nvme_auth.o
00:02:48.900 CC lib/nvme/nvme_cuse.o
00:02:48.900 CC lib/accel/accel.o
00:02:48.900 CC lib/nvme/nvme_rdma.o
00:02:49.160 CC lib/accel/accel_rpc.o
00:02:49.160 CC lib/accel/accel_sw.o
00:02:49.420 CC lib/blob/blobstore.o
00:02:49.420 CC lib/blob/request.o
00:02:49.420 CC lib/init/json_config.o
00:02:49.680 CC lib/init/subsystem.o
00:02:49.680 CC lib/virtio/virtio.o
00:02:49.680 CC lib/blob/zeroes.o
00:02:49.680 CC lib/blob/blob_bs_dev.o
00:02:49.680 CC lib/virtio/virtio_vhost_user.o
00:02:49.680 CC lib/init/subsystem_rpc.o
00:02:49.939 CC lib/virtio/virtio_vfio_user.o
00:02:49.939 CC lib/virtio/virtio_pci.o
00:02:49.939 CC lib/init/rpc.o
00:02:49.939 LIB libspdk_init.a
00:02:50.199 LIB libspdk_accel.a
00:02:50.199 SO libspdk_init.so.6.0
00:02:50.199 CC lib/fsdev/fsdev_io.o
00:02:50.199 CC lib/fsdev/fsdev.o
00:02:50.199 CC lib/fsdev/fsdev_rpc.o
00:02:50.199 LIB libspdk_virtio.a
00:02:50.199 SO libspdk_accel.so.16.0
00:02:50.199 SYMLINK libspdk_init.so
00:02:50.199 SO libspdk_virtio.so.7.0
00:02:50.199 SYMLINK libspdk_accel.so
00:02:50.199 SYMLINK libspdk_virtio.so
00:02:50.199 LIB libspdk_nvme.a
00:02:50.459 CC lib/event/app.o
00:02:50.459 CC lib/event/app_rpc.o
00:02:50.459 CC lib/event/reactor.o
00:02:50.459 CC lib/event/scheduler_static.o
00:02:50.459 CC lib/event/log_rpc.o
00:02:50.459 CC lib/bdev/bdev.o
00:02:50.459 CC lib/bdev/bdev_rpc.o
00:02:50.459 SO libspdk_nvme.so.14.0
00:02:50.459 CC lib/bdev/bdev_zone.o
00:02:50.459 CC lib/bdev/part.o
00:02:50.719 CC lib/bdev/scsi_nvme.o
00:02:50.719 LIB libspdk_fsdev.a
00:02:50.719 SO libspdk_fsdev.so.1.0
00:02:50.719 SYMLINK libspdk_nvme.so
00:02:50.719 LIB libspdk_event.a
00:02:50.719 SYMLINK libspdk_fsdev.so
00:02:50.979 SO libspdk_event.so.14.0
00:02:50.979 SYMLINK libspdk_event.so
00:02:51.239 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:02:51.808 LIB libspdk_fuse_dispatcher.a
00:02:51.808 SO libspdk_fuse_dispatcher.so.1.0
00:02:52.067 SYMLINK libspdk_fuse_dispatcher.so
00:02:52.635 LIB libspdk_blob.a
00:02:52.893 SO libspdk_blob.so.11.0
00:02:52.893 SYMLINK libspdk_blob.so
00:02:53.151 LIB libspdk_bdev.a
00:02:53.151 SO libspdk_bdev.so.16.0
00:02:53.151 CC lib/lvol/lvol.o
00:02:53.151 CC lib/blobfs/tree.o
00:02:53.151 CC lib/blobfs/blobfs.o
00:02:53.414 SYMLINK libspdk_bdev.so
00:02:53.673 CC lib/nvmf/ctrlr.o
00:02:53.673 CC lib/nvmf/ctrlr_discovery.o
00:02:53.673 CC lib/nvmf/subsystem.o
00:02:53.673 CC lib/nvmf/ctrlr_bdev.o
00:02:53.673 CC lib/ublk/ublk.o
00:02:53.673 CC lib/scsi/dev.o
00:02:53.673 CC lib/nbd/nbd.o
00:02:53.673 CC lib/ftl/ftl_core.o
00:02:53.931 CC lib/scsi/lun.o
00:02:53.931 CC lib/ftl/ftl_init.o
00:02:53.931 CC lib/nbd/nbd_rpc.o
00:02:54.190 CC lib/scsi/port.o
00:02:54.190 CC lib/scsi/scsi.o
00:02:54.190 LIB libspdk_blobfs.a
00:02:54.190 LIB libspdk_nbd.a
00:02:54.190 CC lib/ftl/ftl_layout.o
00:02:54.190 SO libspdk_blobfs.so.10.0
00:02:54.190 SO libspdk_nbd.so.7.0
00:02:54.190 CC lib/ublk/ublk_rpc.o
00:02:54.190 LIB libspdk_lvol.a
00:02:54.190 SO libspdk_lvol.so.10.0
00:02:54.190 CC lib/scsi/scsi_bdev.o
00:02:54.190 SYMLINK libspdk_blobfs.so
00:02:54.190 SYMLINK libspdk_nbd.so
00:02:54.190 CC lib/scsi/scsi_pr.o
00:02:54.190 CC lib/ftl/ftl_debug.o
00:02:54.190 CC lib/ftl/ftl_io.o
00:02:54.448 SYMLINK libspdk_lvol.so
00:02:54.448 CC lib/ftl/ftl_sb.o
00:02:54.448 CC lib/nvmf/nvmf.o
00:02:54.448 LIB libspdk_ublk.a
00:02:54.448 SO libspdk_ublk.so.3.0
00:02:54.448 CC lib/ftl/ftl_l2p.o
00:02:54.448 SYMLINK libspdk_ublk.so
00:02:54.448 CC lib/nvmf/nvmf_rpc.o
00:02:54.448 CC lib/ftl/ftl_l2p_flat.o
00:02:54.448 CC lib/ftl/ftl_nv_cache.o
00:02:54.448 CC lib/ftl/ftl_band.o
00:02:54.706 CC lib/scsi/scsi_rpc.o
00:02:54.706 CC lib/scsi/task.o
00:02:54.706 CC lib/ftl/ftl_band_ops.o
00:02:54.706 CC lib/ftl/ftl_writer.o
00:02:54.706 CC lib/ftl/ftl_rq.o
00:02:54.965 CC lib/nvmf/transport.o
00:02:54.965 LIB libspdk_scsi.a
00:02:54.965 SO libspdk_scsi.so.9.0
00:02:54.965 CC lib/ftl/ftl_reloc.o
00:02:54.965 CC lib/ftl/ftl_l2p_cache.o
00:02:54.965 SYMLINK libspdk_scsi.so
00:02:54.965 CC lib/nvmf/tcp.o
00:02:54.965 CC lib/nvmf/stubs.o
00:02:54.965 CC lib/ftl/ftl_p2l.o
00:02:55.225 CC lib/ftl/ftl_p2l_log.o
00:02:55.225 CC lib/ftl/mngt/ftl_mngt.o
00:02:55.225 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:55.484 CC lib/nvmf/mdns_server.o
00:02:55.484 CC lib/nvmf/rdma.o
00:02:55.484 CC lib/iscsi/conn.o
00:02:55.484 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:55.484 CC lib/iscsi/init_grp.o
00:02:55.484 CC lib/iscsi/iscsi.o
00:02:55.484 CC lib/vhost/vhost.o
00:02:55.484 CC lib/iscsi/param.o
00:02:55.484 CC lib/iscsi/portal_grp.o
00:02:55.744 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:55.744 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:55.744 CC lib/vhost/vhost_rpc.o
00:02:55.744 CC lib/vhost/vhost_scsi.o
00:02:56.003 CC lib/vhost/vhost_blk.o
00:02:56.003 CC lib/vhost/rte_vhost_user.o
00:02:56.003 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:56.003 CC lib/iscsi/tgt_node.o
00:02:56.003 CC lib/iscsi/iscsi_subsystem.o
00:02:56.262 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:56.521 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:56.521 CC lib/iscsi/iscsi_rpc.o
00:02:56.521 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:56.521 CC lib/iscsi/task.o
00:02:56.521 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:56.521 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:56.781 CC lib/nvmf/auth.o
00:02:56.781 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:56.781 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:56.781 CC lib/ftl/utils/ftl_conf.o
00:02:56.781 CC lib/ftl/utils/ftl_md.o
00:02:56.781 CC lib/ftl/utils/ftl_mempool.o
00:02:57.040 CC lib/ftl/utils/ftl_bitmap.o
00:02:57.040 CC lib/ftl/utils/ftl_property.o
00:02:57.040 LIB libspdk_vhost.a
00:02:57.040 LIB libspdk_iscsi.a
00:02:57.040 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:57.040 SO libspdk_vhost.so.8.0
00:02:57.040 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:57.040 SO libspdk_iscsi.so.8.0
00:02:57.040 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:57.300 SYMLINK libspdk_vhost.so
00:02:57.300 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:57.300 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:57.300 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:57.300 SYMLINK libspdk_iscsi.so
00:02:57.300 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:57.300 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:57.300 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:57.300 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:57.300 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:57.300 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:02:57.300 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:02:57.300 CC lib/ftl/base/ftl_base_dev.o
00:02:57.559 CC lib/ftl/base/ftl_base_bdev.o
00:02:57.559 CC lib/ftl/ftl_trace.o
00:02:57.818 LIB libspdk_ftl.a
00:02:57.818 LIB libspdk_nvmf.a
00:02:58.077 SO libspdk_ftl.so.9.0
00:02:58.077 SO libspdk_nvmf.so.19.0
00:02:58.335 SYMLINK libspdk_ftl.so
00:02:58.335 SYMLINK libspdk_nvmf.so
00:02:58.594 CC module/env_dpdk/env_dpdk_rpc.o
00:02:58.853 CC module/accel/ioat/accel_ioat.o
00:02:58.853 CC module/keyring/file/keyring.o
00:02:58.853 CC module/accel/error/accel_error.o
00:02:58.853 CC module/keyring/linux/keyring.o
00:02:58.853 CC module/fsdev/aio/fsdev_aio.o
00:02:58.853 CC module/accel/dsa/accel_dsa.o
00:02:58.853 CC module/sock/posix/posix.o
00:02:58.853 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:58.853 CC module/blob/bdev/blob_bdev.o
00:02:58.853 LIB libspdk_env_dpdk_rpc.a
00:02:58.853 SO libspdk_env_dpdk_rpc.so.6.0
00:02:58.853 SYMLINK libspdk_env_dpdk_rpc.so
00:02:58.853 CC module/fsdev/aio/fsdev_aio_rpc.o
00:02:58.853 CC module/keyring/file/keyring_rpc.o
00:02:58.853 CC module/keyring/linux/keyring_rpc.o
00:02:58.853 CC module/accel/error/accel_error_rpc.o
00:02:58.853 CC module/accel/ioat/accel_ioat_rpc.o
00:02:58.853 LIB libspdk_scheduler_dynamic.a
00:02:58.853 SO libspdk_scheduler_dynamic.so.4.0
00:02:59.113 LIB libspdk_keyring_file.a
00:02:59.113 SYMLINK libspdk_scheduler_dynamic.so
00:02:59.113 SO libspdk_keyring_file.so.2.0
00:02:59.113 LIB libspdk_blob_bdev.a
00:02:59.113 LIB libspdk_accel_ioat.a
00:02:59.113 LIB libspdk_accel_error.a
00:02:59.113 SO libspdk_blob_bdev.so.11.0
00:02:59.113 LIB libspdk_keyring_linux.a
00:02:59.113 CC module/accel/dsa/accel_dsa_rpc.o
00:02:59.113 SO libspdk_accel_ioat.so.6.0
00:02:59.113 SO libspdk_accel_error.so.2.0
00:02:59.113 SYMLINK libspdk_keyring_file.so
00:02:59.113 SO libspdk_keyring_linux.so.1.0
00:02:59.113 SYMLINK libspdk_blob_bdev.so
00:02:59.113 SYMLINK libspdk_accel_ioat.so
00:02:59.113 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:59.113 CC module/accel/iaa/accel_iaa.o
00:02:59.113 SYMLINK libspdk_accel_error.so
00:02:59.113 CC module/fsdev/aio/linux_aio_mgr.o
00:02:59.113 SYMLINK libspdk_keyring_linux.so
00:02:59.113 CC module/accel/iaa/accel_iaa_rpc.o
00:02:59.113 LIB libspdk_accel_dsa.a
00:02:59.372 SO libspdk_accel_dsa.so.5.0
00:02:59.372 CC module/scheduler/gscheduler/gscheduler.o
00:02:59.372 LIB libspdk_scheduler_dpdk_governor.a
00:02:59.372 SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:59.372 SYMLINK libspdk_accel_dsa.so
00:02:59.372 LIB libspdk_accel_iaa.a
00:02:59.372 CC module/blobfs/bdev/blobfs_bdev.o
00:02:59.372 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:59.372 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:59.372 CC module/bdev/delay/vbdev_delay.o
00:02:59.372 SO libspdk_accel_iaa.so.3.0
00:02:59.372 LIB libspdk_scheduler_gscheduler.a
00:02:59.372 CC module/bdev/error/vbdev_error.o
00:02:59.372 SO libspdk_scheduler_gscheduler.so.4.0
00:02:59.372 LIB libspdk_fsdev_aio.a
00:02:59.372 SYMLINK libspdk_accel_iaa.so
00:02:59.372 CC module/bdev/error/vbdev_error_rpc.o
00:02:59.372 CC module/bdev/gpt/gpt.o
00:02:59.372 SO libspdk_fsdev_aio.so.1.0
00:02:59.632 SYMLINK libspdk_scheduler_gscheduler.so
00:02:59.632 CC module/bdev/gpt/vbdev_gpt.o
00:02:59.632 LIB libspdk_sock_posix.a
00:02:59.632 CC module/bdev/lvol/vbdev_lvol.o
00:02:59.632 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:59.632 SO libspdk_sock_posix.so.6.0
00:02:59.632 LIB libspdk_blobfs_bdev.a
00:02:59.632 SYMLINK libspdk_fsdev_aio.so
00:02:59.632 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:59.632 SO libspdk_blobfs_bdev.so.6.0
00:02:59.632 SYMLINK libspdk_sock_posix.so
00:02:59.632 SYMLINK libspdk_blobfs_bdev.so
00:02:59.632 LIB libspdk_bdev_error.a
00:02:59.892 SO libspdk_bdev_error.so.6.0
00:02:59.892 LIB libspdk_bdev_gpt.a
00:02:59.892 LIB libspdk_bdev_delay.a
00:02:59.892 SO libspdk_bdev_gpt.so.6.0
00:02:59.892 SO libspdk_bdev_delay.so.6.0
00:02:59.892 CC module/bdev/nvme/bdev_nvme.o
00:02:59.892 CC module/bdev/null/bdev_null.o
00:02:59.892 SYMLINK libspdk_bdev_error.so
00:02:59.892 CC module/bdev/malloc/bdev_malloc.o
00:02:59.892 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:59.892 CC module/bdev/passthru/vbdev_passthru.o
00:02:59.892 SYMLINK libspdk_bdev_gpt.so
00:02:59.892 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:59.892 CC module/bdev/raid/bdev_raid.o
00:02:59.892 SYMLINK libspdk_bdev_delay.so
00:02:59.892 CC module/bdev/raid/bdev_raid_rpc.o
00:02:59.893 CC module/bdev/nvme/nvme_rpc.o
00:03:00.153 CC module/bdev/nvme/bdev_mdns_client.o
00:03:00.153 LIB libspdk_bdev_lvol.a
00:03:00.153 CC module/bdev/null/bdev_null_rpc.o
00:03:00.153 CC module/bdev/nvme/vbdev_opal.o
00:03:00.153 SO libspdk_bdev_lvol.so.6.0
00:03:00.153 SYMLINK libspdk_bdev_lvol.so
00:03:00.153 CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:00.153 CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:00.153 LIB libspdk_bdev_passthru.a
00:03:00.153 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:00.153 SO libspdk_bdev_passthru.so.6.0
00:03:00.153 LIB libspdk_bdev_null.a
00:03:00.153 CC module/bdev/raid/bdev_raid_sb.o
00:03:00.414 SO libspdk_bdev_null.so.6.0
00:03:00.414 SYMLINK libspdk_bdev_passthru.so
00:03:00.414 SYMLINK libspdk_bdev_null.so
00:03:00.414 CC module/bdev/raid/raid0.o
00:03:00.414 LIB libspdk_bdev_malloc.a
00:03:00.414 SO libspdk_bdev_malloc.so.6.0
00:03:00.414 CC module/bdev/raid/raid1.o
00:03:00.414 CC module/bdev/split/vbdev_split.o
00:03:00.414 SYMLINK libspdk_bdev_malloc.so
00:03:00.414 CC module/bdev/split/vbdev_split_rpc.o
00:03:00.414 CC module/bdev/raid/concat.o
00:03:00.414 CC module/bdev/zone_block/vbdev_zone_block.o
00:03:00.414 CC module/bdev/aio/bdev_aio.o
00:03:00.674 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:00.674 CC module/bdev/raid/raid5f.o
00:03:00.674 CC module/bdev/aio/bdev_aio_rpc.o
00:03:00.674 LIB libspdk_bdev_split.a
00:03:00.674 SO libspdk_bdev_split.so.6.0
00:03:00.674 SYMLINK libspdk_bdev_split.so
00:03:00.934 CC module/bdev/ftl/bdev_ftl.o
00:03:00.934 CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:00.934 CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:00.934 CC module/bdev/virtio/bdev_virtio_blk.o
00:03:00.934 LIB libspdk_bdev_zone_block.a
00:03:00.934 CC module/bdev/iscsi/bdev_iscsi.o
00:03:00.934 LIB libspdk_bdev_aio.a
00:03:00.934 SO libspdk_bdev_zone_block.so.6.0
00:03:00.934 SO libspdk_bdev_aio.so.6.0
00:03:00.934 SYMLINK libspdk_bdev_zone_block.so
00:03:00.934 CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:00.934 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:00.934 SYMLINK libspdk_bdev_aio.so
00:03:01.194 LIB libspdk_bdev_ftl.a
00:03:01.194 LIB libspdk_bdev_raid.a
00:03:01.194 SO libspdk_bdev_ftl.so.6.0
00:03:01.194 SYMLINK libspdk_bdev_ftl.so
00:03:01.194 SO libspdk_bdev_raid.so.6.0
00:03:01.194 LIB libspdk_bdev_iscsi.a
00:03:01.454 SYMLINK libspdk_bdev_raid.so
00:03:01.454 SO libspdk_bdev_iscsi.so.6.0
00:03:01.454 LIB libspdk_bdev_virtio.a
00:03:01.454 SYMLINK libspdk_bdev_iscsi.so
00:03:01.454 SO libspdk_bdev_virtio.so.6.0
00:03:01.454 SYMLINK libspdk_bdev_virtio.so
00:03:02.393 LIB libspdk_bdev_nvme.a
00:03:02.393 SO libspdk_bdev_nvme.so.7.0
00:03:02.653 SYMLINK libspdk_bdev_nvme.so
00:03:03.223 CC module/event/subsystems/fsdev/fsdev.o
00:03:03.223 CC module/event/subsystems/scheduler/scheduler.o
00:03:03.223 CC module/event/subsystems/iobuf/iobuf.o
00:03:03.223 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:03.223 CC module/event/subsystems/vmd/vmd.o
00:03:03.223 CC module/event/subsystems/vmd/vmd_rpc.o
00:03:03.223 CC module/event/subsystems/keyring/keyring.o
00:03:03.223 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:03.223 CC module/event/subsystems/sock/sock.o
00:03:03.223 LIB libspdk_event_vhost_blk.a
00:03:03.223 LIB libspdk_event_keyring.a
00:03:03.223 LIB libspdk_event_scheduler.a
00:03:03.223 LIB libspdk_event_vmd.a
00:03:03.483 LIB libspdk_event_fsdev.a
00:03:03.483 LIB libspdk_event_iobuf.a
00:03:03.483 LIB libspdk_event_sock.a
00:03:03.483 SO libspdk_event_keyring.so.1.0
00:03:03.483 SO libspdk_event_vhost_blk.so.3.0
00:03:03.483 SO libspdk_event_scheduler.so.4.0
00:03:03.483 SO libspdk_event_vmd.so.6.0
00:03:03.483 SO libspdk_event_fsdev.so.1.0
00:03:03.483 SO libspdk_event_sock.so.5.0
00:03:03.483 SO libspdk_event_iobuf.so.3.0
00:03:03.483 SYMLINK libspdk_event_keyring.so
00:03:03.483 SYMLINK libspdk_event_scheduler.so
00:03:03.483 SYMLINK libspdk_event_vhost_blk.so
00:03:03.483 SYMLINK libspdk_event_vmd.so
00:03:03.483 SYMLINK libspdk_event_fsdev.so
00:03:03.483 SYMLINK libspdk_event_sock.so
00:03:03.483 SYMLINK libspdk_event_iobuf.so
00:03:03.742 CC module/event/subsystems/accel/accel.o
00:03:04.002 LIB libspdk_event_accel.a
00:03:04.002 SO libspdk_event_accel.so.6.0
00:03:04.002 SYMLINK libspdk_event_accel.so
00:03:04.572 CC module/event/subsystems/bdev/bdev.o
00:03:04.832 LIB libspdk_event_bdev.a
00:03:04.832 SO libspdk_event_bdev.so.6.0
00:03:04.832 SYMLINK libspdk_event_bdev.so
00:03:05.092 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:05.092 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:05.092 CC module/event/subsystems/nbd/nbd.o
00:03:05.092 CC module/event/subsystems/scsi/scsi.o
00:03:05.092 CC module/event/subsystems/ublk/ublk.o
00:03:05.351 LIB libspdk_event_ublk.a
00:03:05.351 LIB libspdk_event_nbd.a
00:03:05.351 LIB libspdk_event_scsi.a
00:03:05.351 SO libspdk_event_ublk.so.3.0
00:03:05.351 SO libspdk_event_nbd.so.6.0
00:03:05.351 SO libspdk_event_scsi.so.6.0
00:03:05.351 LIB libspdk_event_nvmf.a
00:03:05.351 SO libspdk_event_nvmf.so.6.0
00:03:05.351 SYMLINK libspdk_event_ublk.so
00:03:05.351 SYMLINK libspdk_event_scsi.so
00:03:05.351 SYMLINK libspdk_event_nbd.so
00:03:05.611 SYMLINK libspdk_event_nvmf.so
00:03:05.871 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:05.871 CC module/event/subsystems/iscsi/iscsi.o
00:03:05.871 LIB libspdk_event_vhost_scsi.a
00:03:05.871 SO libspdk_event_vhost_scsi.so.3.0
00:03:05.871 LIB libspdk_event_iscsi.a
00:03:06.131 SYMLINK libspdk_event_vhost_scsi.so
00:03:06.131 SO libspdk_event_iscsi.so.6.0
00:03:06.131 SYMLINK libspdk_event_iscsi.so
00:03:06.391 SO libspdk.so.6.0
00:03:06.391 SYMLINK libspdk.so
00:03:06.650 TEST_HEADER include/spdk/accel.h
00:03:06.650 TEST_HEADER include/spdk/accel_module.h
00:03:06.650 CXX app/trace/trace.o
00:03:06.650 TEST_HEADER include/spdk/assert.h
00:03:06.650 CC app/trace_record/trace_record.o
00:03:06.650 TEST_HEADER include/spdk/barrier.h
00:03:06.650 TEST_HEADER include/spdk/base64.h
00:03:06.650 TEST_HEADER include/spdk/bdev.h
00:03:06.650 TEST_HEADER include/spdk/bdev_module.h
00:03:06.650 TEST_HEADER include/spdk/bdev_zone.h
00:03:06.650 TEST_HEADER include/spdk/bit_array.h
00:03:06.650 TEST_HEADER include/spdk/bit_pool.h
00:03:06.650 TEST_HEADER include/spdk/blob_bdev.h
00:03:06.650 TEST_HEADER include/spdk/blobfs_bdev.h
00:03:06.650 TEST_HEADER include/spdk/blobfs.h
00:03:06.650 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:06.650 TEST_HEADER include/spdk/blob.h
00:03:06.650 TEST_HEADER include/spdk/conf.h
00:03:06.650 TEST_HEADER include/spdk/config.h 00:03:06.650 TEST_HEADER include/spdk/cpuset.h 00:03:06.650 TEST_HEADER include/spdk/crc16.h 00:03:06.650 TEST_HEADER include/spdk/crc32.h 00:03:06.650 TEST_HEADER include/spdk/crc64.h 00:03:06.650 TEST_HEADER include/spdk/dif.h 00:03:06.650 TEST_HEADER include/spdk/dma.h 00:03:06.650 TEST_HEADER include/spdk/endian.h 00:03:06.650 TEST_HEADER include/spdk/env_dpdk.h 00:03:06.650 TEST_HEADER include/spdk/env.h 00:03:06.650 TEST_HEADER include/spdk/event.h 00:03:06.650 TEST_HEADER include/spdk/fd_group.h 00:03:06.650 TEST_HEADER include/spdk/fd.h 00:03:06.650 TEST_HEADER include/spdk/file.h 00:03:06.650 TEST_HEADER include/spdk/fsdev.h 00:03:06.650 TEST_HEADER include/spdk/fsdev_module.h 00:03:06.650 TEST_HEADER include/spdk/ftl.h 00:03:06.650 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:06.650 TEST_HEADER include/spdk/gpt_spec.h 00:03:06.650 TEST_HEADER include/spdk/hexlify.h 00:03:06.650 TEST_HEADER include/spdk/histogram_data.h 00:03:06.650 CC examples/util/zipf/zipf.o 00:03:06.650 TEST_HEADER include/spdk/idxd.h 00:03:06.650 TEST_HEADER include/spdk/idxd_spec.h 00:03:06.650 CC examples/ioat/perf/perf.o 00:03:06.650 TEST_HEADER include/spdk/init.h 00:03:06.650 TEST_HEADER include/spdk/ioat.h 00:03:06.650 TEST_HEADER include/spdk/ioat_spec.h 00:03:06.650 CC test/thread/poller_perf/poller_perf.o 00:03:06.650 TEST_HEADER include/spdk/iscsi_spec.h 00:03:06.650 TEST_HEADER include/spdk/json.h 00:03:06.650 TEST_HEADER include/spdk/jsonrpc.h 00:03:06.650 TEST_HEADER include/spdk/keyring.h 00:03:06.650 TEST_HEADER include/spdk/keyring_module.h 00:03:06.650 TEST_HEADER include/spdk/likely.h 00:03:06.650 TEST_HEADER include/spdk/log.h 00:03:06.650 TEST_HEADER include/spdk/lvol.h 00:03:06.650 TEST_HEADER include/spdk/md5.h 00:03:06.650 TEST_HEADER include/spdk/memory.h 00:03:06.650 TEST_HEADER include/spdk/mmio.h 00:03:06.650 TEST_HEADER include/spdk/nbd.h 00:03:06.650 TEST_HEADER include/spdk/net.h 00:03:06.650 CC 
test/dma/test_dma/test_dma.o 00:03:06.650 TEST_HEADER include/spdk/notify.h 00:03:06.650 TEST_HEADER include/spdk/nvme.h 00:03:06.650 TEST_HEADER include/spdk/nvme_intel.h 00:03:06.650 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:06.650 CC test/app/bdev_svc/bdev_svc.o 00:03:06.650 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:06.650 TEST_HEADER include/spdk/nvme_spec.h 00:03:06.650 TEST_HEADER include/spdk/nvme_zns.h 00:03:06.910 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:06.910 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:06.910 TEST_HEADER include/spdk/nvmf.h 00:03:06.910 TEST_HEADER include/spdk/nvmf_spec.h 00:03:06.910 TEST_HEADER include/spdk/nvmf_transport.h 00:03:06.910 TEST_HEADER include/spdk/opal.h 00:03:06.910 TEST_HEADER include/spdk/opal_spec.h 00:03:06.910 TEST_HEADER include/spdk/pci_ids.h 00:03:06.910 TEST_HEADER include/spdk/pipe.h 00:03:06.910 TEST_HEADER include/spdk/queue.h 00:03:06.910 TEST_HEADER include/spdk/reduce.h 00:03:06.910 TEST_HEADER include/spdk/rpc.h 00:03:06.910 TEST_HEADER include/spdk/scheduler.h 00:03:06.910 TEST_HEADER include/spdk/scsi.h 00:03:06.910 TEST_HEADER include/spdk/scsi_spec.h 00:03:06.910 TEST_HEADER include/spdk/sock.h 00:03:06.910 TEST_HEADER include/spdk/stdinc.h 00:03:06.910 TEST_HEADER include/spdk/string.h 00:03:06.910 TEST_HEADER include/spdk/thread.h 00:03:06.910 TEST_HEADER include/spdk/trace.h 00:03:06.910 CC test/env/mem_callbacks/mem_callbacks.o 00:03:06.910 TEST_HEADER include/spdk/trace_parser.h 00:03:06.910 TEST_HEADER include/spdk/tree.h 00:03:06.910 TEST_HEADER include/spdk/ublk.h 00:03:06.910 TEST_HEADER include/spdk/util.h 00:03:06.910 LINK interrupt_tgt 00:03:06.910 TEST_HEADER include/spdk/uuid.h 00:03:06.910 TEST_HEADER include/spdk/version.h 00:03:06.910 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:06.910 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:06.910 TEST_HEADER include/spdk/vhost.h 00:03:06.910 TEST_HEADER include/spdk/vmd.h 00:03:06.910 TEST_HEADER include/spdk/xor.h 
00:03:06.910 TEST_HEADER include/spdk/zipf.h 00:03:06.910 CXX test/cpp_headers/accel.o 00:03:06.910 LINK zipf 00:03:06.911 LINK poller_perf 00:03:06.911 LINK spdk_trace_record 00:03:06.911 LINK bdev_svc 00:03:06.911 LINK ioat_perf 00:03:06.911 CXX test/cpp_headers/accel_module.o 00:03:07.170 LINK spdk_trace 00:03:07.170 CC test/env/vtophys/vtophys.o 00:03:07.170 CC app/nvmf_tgt/nvmf_main.o 00:03:07.170 CXX test/cpp_headers/assert.o 00:03:07.170 CC app/iscsi_tgt/iscsi_tgt.o 00:03:07.170 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:07.170 CXX test/cpp_headers/barrier.o 00:03:07.170 LINK test_dma 00:03:07.170 CC examples/ioat/verify/verify.o 00:03:07.430 LINK vtophys 00:03:07.430 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:07.430 LINK nvmf_tgt 00:03:07.430 LINK env_dpdk_post_init 00:03:07.430 CXX test/cpp_headers/base64.o 00:03:07.430 LINK mem_callbacks 00:03:07.430 LINK iscsi_tgt 00:03:07.430 CC test/app/histogram_perf/histogram_perf.o 00:03:07.430 LINK verify 00:03:07.430 CC test/app/jsoncat/jsoncat.o 00:03:07.430 CXX test/cpp_headers/bdev.o 00:03:07.430 CXX test/cpp_headers/bdev_module.o 00:03:07.690 LINK histogram_perf 00:03:07.690 CC test/app/stub/stub.o 00:03:07.690 CC test/env/memory/memory_ut.o 00:03:07.690 LINK jsoncat 00:03:07.690 CC app/spdk_tgt/spdk_tgt.o 00:03:07.690 CC app/spdk_lspci/spdk_lspci.o 00:03:07.690 CXX test/cpp_headers/bdev_zone.o 00:03:07.690 CXX test/cpp_headers/bit_array.o 00:03:07.690 LINK stub 00:03:07.690 LINK nvme_fuzz 00:03:07.690 CC test/env/pci/pci_ut.o 00:03:07.951 CXX test/cpp_headers/bit_pool.o 00:03:07.951 LINK spdk_lspci 00:03:07.951 CC examples/thread/thread/thread_ex.o 00:03:07.951 CXX test/cpp_headers/blob_bdev.o 00:03:07.951 CXX test/cpp_headers/blobfs_bdev.o 00:03:07.951 LINK spdk_tgt 00:03:07.951 CC test/rpc_client/rpc_client_test.o 00:03:07.951 CXX test/cpp_headers/blobfs.o 00:03:07.951 CXX test/cpp_headers/blob.o 00:03:07.951 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:08.210 LINK thread 00:03:08.210 CXX 
test/cpp_headers/conf.o 00:03:08.210 LINK rpc_client_test 00:03:08.210 LINK pci_ut 00:03:08.210 CC app/spdk_nvme_perf/perf.o 00:03:08.210 CC examples/sock/hello_world/hello_sock.o 00:03:08.210 CC app/spdk_nvme_identify/identify.o 00:03:08.210 CC examples/vmd/lsvmd/lsvmd.o 00:03:08.210 CXX test/cpp_headers/config.o 00:03:08.210 CXX test/cpp_headers/cpuset.o 00:03:08.469 CC examples/vmd/led/led.o 00:03:08.469 LINK lsvmd 00:03:08.469 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:08.469 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:08.469 CXX test/cpp_headers/crc16.o 00:03:08.469 LINK hello_sock 00:03:08.469 LINK led 00:03:08.469 CXX test/cpp_headers/crc32.o 00:03:08.729 CXX test/cpp_headers/crc64.o 00:03:08.729 CC examples/idxd/perf/perf.o 00:03:08.729 CC test/accel/dif/dif.o 00:03:08.729 LINK memory_ut 00:03:08.729 CC test/blobfs/mkfs/mkfs.o 00:03:09.007 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:09.007 LINK vhost_fuzz 00:03:09.007 CXX test/cpp_headers/dif.o 00:03:09.007 LINK mkfs 00:03:09.007 CXX test/cpp_headers/dma.o 00:03:09.007 LINK idxd_perf 00:03:09.007 LINK spdk_nvme_perf 00:03:09.007 LINK hello_fsdev 00:03:09.305 CC examples/accel/perf/accel_perf.o 00:03:09.305 CC test/event/event_perf/event_perf.o 00:03:09.305 CXX test/cpp_headers/endian.o 00:03:09.305 LINK spdk_nvme_identify 00:03:09.305 CC app/spdk_nvme_discover/discovery_aer.o 00:03:09.305 CXX test/cpp_headers/env_dpdk.o 00:03:09.305 CXX test/cpp_headers/env.o 00:03:09.305 LINK event_perf 00:03:09.305 CXX test/cpp_headers/event.o 00:03:09.565 CXX test/cpp_headers/fd_group.o 00:03:09.565 CC test/event/reactor/reactor.o 00:03:09.565 CXX test/cpp_headers/fd.o 00:03:09.565 CXX test/cpp_headers/file.o 00:03:09.565 CC app/spdk_top/spdk_top.o 00:03:09.565 LINK dif 00:03:09.565 LINK spdk_nvme_discover 00:03:09.565 LINK reactor 00:03:09.565 CXX test/cpp_headers/fsdev.o 00:03:09.825 LINK accel_perf 00:03:09.825 CC test/event/reactor_perf/reactor_perf.o 00:03:09.825 CC 
test/event/app_repeat/app_repeat.o 00:03:09.825 CXX test/cpp_headers/fsdev_module.o 00:03:09.825 CC test/lvol/esnap/esnap.o 00:03:09.825 CC test/event/scheduler/scheduler.o 00:03:09.825 LINK iscsi_fuzz 00:03:09.825 LINK reactor_perf 00:03:09.825 CC test/nvme/aer/aer.o 00:03:09.825 LINK app_repeat 00:03:10.085 CC test/bdev/bdevio/bdevio.o 00:03:10.085 CXX test/cpp_headers/ftl.o 00:03:10.085 CXX test/cpp_headers/fuse_dispatcher.o 00:03:10.085 LINK scheduler 00:03:10.085 CC examples/blob/hello_world/hello_blob.o 00:03:10.085 LINK aer 00:03:10.085 CC examples/nvme/hello_world/hello_world.o 00:03:10.345 CXX test/cpp_headers/gpt_spec.o 00:03:10.345 CC examples/bdev/hello_world/hello_bdev.o 00:03:10.345 LINK hello_blob 00:03:10.345 CC app/vhost/vhost.o 00:03:10.345 LINK bdevio 00:03:10.345 CC app/spdk_dd/spdk_dd.o 00:03:10.345 CXX test/cpp_headers/hexlify.o 00:03:10.345 LINK hello_world 00:03:10.345 CC test/nvme/reset/reset.o 00:03:10.605 LINK vhost 00:03:10.605 LINK hello_bdev 00:03:10.605 LINK spdk_top 00:03:10.605 CXX test/cpp_headers/histogram_data.o 00:03:10.605 CC examples/blob/cli/blobcli.o 00:03:10.605 CC app/fio/nvme/fio_plugin.o 00:03:10.605 CC examples/nvme/reconnect/reconnect.o 00:03:10.605 CXX test/cpp_headers/idxd.o 00:03:10.605 LINK reset 00:03:10.864 LINK spdk_dd 00:03:10.864 CC test/nvme/sgl/sgl.o 00:03:10.864 CC app/fio/bdev/fio_plugin.o 00:03:10.864 CC examples/bdev/bdevperf/bdevperf.o 00:03:10.864 CXX test/cpp_headers/idxd_spec.o 00:03:10.864 CC test/nvme/e2edp/nvme_dp.o 00:03:11.124 CXX test/cpp_headers/init.o 00:03:11.124 LINK reconnect 00:03:11.124 CC test/nvme/overhead/overhead.o 00:03:11.124 LINK sgl 00:03:11.124 CXX test/cpp_headers/ioat.o 00:03:11.124 LINK blobcli 00:03:11.124 LINK spdk_nvme 00:03:11.124 LINK nvme_dp 00:03:11.383 LINK spdk_bdev 00:03:11.383 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:11.383 CXX test/cpp_headers/ioat_spec.o 00:03:11.383 CC test/nvme/err_injection/err_injection.o 00:03:11.383 LINK overhead 00:03:11.383 CC 
examples/nvme/arbitration/arbitration.o 00:03:11.383 CC examples/nvme/hotplug/hotplug.o 00:03:11.383 CC test/nvme/startup/startup.o 00:03:11.383 CXX test/cpp_headers/iscsi_spec.o 00:03:11.383 CC test/nvme/reserve/reserve.o 00:03:11.643 LINK err_injection 00:03:11.643 CC test/nvme/simple_copy/simple_copy.o 00:03:11.643 LINK bdevperf 00:03:11.643 CXX test/cpp_headers/json.o 00:03:11.643 LINK startup 00:03:11.643 LINK hotplug 00:03:11.643 LINK reserve 00:03:11.643 LINK arbitration 00:03:11.901 CXX test/cpp_headers/jsonrpc.o 00:03:11.901 CC test/nvme/connect_stress/connect_stress.o 00:03:11.901 LINK nvme_manage 00:03:11.901 LINK simple_copy 00:03:11.901 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:11.901 CC test/nvme/compliance/nvme_compliance.o 00:03:11.901 CC test/nvme/boot_partition/boot_partition.o 00:03:11.901 CC test/nvme/fused_ordering/fused_ordering.o 00:03:11.901 CXX test/cpp_headers/keyring.o 00:03:11.901 CC examples/nvme/abort/abort.o 00:03:11.901 LINK connect_stress 00:03:12.160 CXX test/cpp_headers/keyring_module.o 00:03:12.160 LINK boot_partition 00:03:12.160 LINK cmb_copy 00:03:12.160 CXX test/cpp_headers/likely.o 00:03:12.160 LINK fused_ordering 00:03:12.160 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:12.160 CXX test/cpp_headers/log.o 00:03:12.160 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:12.160 LINK nvme_compliance 00:03:12.160 CXX test/cpp_headers/lvol.o 00:03:12.160 CXX test/cpp_headers/md5.o 00:03:12.420 LINK pmr_persistence 00:03:12.420 CC test/nvme/fdp/fdp.o 00:03:12.420 CXX test/cpp_headers/memory.o 00:03:12.420 LINK abort 00:03:12.420 CC test/nvme/cuse/cuse.o 00:03:12.420 LINK doorbell_aers 00:03:12.420 CXX test/cpp_headers/mmio.o 00:03:12.420 CXX test/cpp_headers/nbd.o 00:03:12.420 CXX test/cpp_headers/net.o 00:03:12.420 CXX test/cpp_headers/notify.o 00:03:12.420 CXX test/cpp_headers/nvme.o 00:03:12.420 CXX test/cpp_headers/nvme_intel.o 00:03:12.679 CXX test/cpp_headers/nvme_ocssd.o 00:03:12.679 CXX 
test/cpp_headers/nvme_ocssd_spec.o 00:03:12.679 CXX test/cpp_headers/nvme_spec.o 00:03:12.679 CXX test/cpp_headers/nvme_zns.o 00:03:12.679 CXX test/cpp_headers/nvmf_cmd.o 00:03:12.679 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:12.679 CC examples/nvmf/nvmf/nvmf.o 00:03:12.679 LINK fdp 00:03:12.679 CXX test/cpp_headers/nvmf.o 00:03:12.679 CXX test/cpp_headers/nvmf_spec.o 00:03:12.679 CXX test/cpp_headers/nvmf_transport.o 00:03:12.679 CXX test/cpp_headers/opal.o 00:03:12.939 CXX test/cpp_headers/opal_spec.o 00:03:12.939 CXX test/cpp_headers/pci_ids.o 00:03:12.939 CXX test/cpp_headers/pipe.o 00:03:12.939 CXX test/cpp_headers/queue.o 00:03:12.939 CXX test/cpp_headers/reduce.o 00:03:12.939 LINK nvmf 00:03:12.939 CXX test/cpp_headers/rpc.o 00:03:12.939 CXX test/cpp_headers/scheduler.o 00:03:12.939 CXX test/cpp_headers/scsi.o 00:03:12.939 CXX test/cpp_headers/scsi_spec.o 00:03:12.939 CXX test/cpp_headers/sock.o 00:03:12.939 CXX test/cpp_headers/stdinc.o 00:03:13.198 CXX test/cpp_headers/string.o 00:03:13.198 CXX test/cpp_headers/thread.o 00:03:13.198 CXX test/cpp_headers/trace.o 00:03:13.198 CXX test/cpp_headers/trace_parser.o 00:03:13.198 CXX test/cpp_headers/tree.o 00:03:13.198 CXX test/cpp_headers/ublk.o 00:03:13.198 CXX test/cpp_headers/util.o 00:03:13.198 CXX test/cpp_headers/uuid.o 00:03:13.198 CXX test/cpp_headers/version.o 00:03:13.198 CXX test/cpp_headers/vfio_user_pci.o 00:03:13.198 CXX test/cpp_headers/vfio_user_spec.o 00:03:13.198 CXX test/cpp_headers/vhost.o 00:03:13.198 CXX test/cpp_headers/vmd.o 00:03:13.198 CXX test/cpp_headers/xor.o 00:03:13.198 CXX test/cpp_headers/zipf.o 00:03:13.768 LINK cuse 00:03:15.675 LINK esnap 00:03:16.245 00:03:16.245 real 1m19.083s 00:03:16.245 user 6m50.987s 00:03:16.245 sys 1m38.974s 00:03:16.245 08:40:54 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:16.245 08:40:54 make -- common/autotest_common.sh@10 -- $ set +x 00:03:16.245 ************************************ 00:03:16.245 END TEST make 00:03:16.245 
************************************ 00:03:16.245 08:40:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:16.245 08:40:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:16.245 08:40:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:16.245 08:40:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.245 08:40:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:16.245 08:40:54 -- pm/common@44 -- $ pid=5450 00:03:16.245 08:40:54 -- pm/common@50 -- $ kill -TERM 5450 00:03:16.245 08:40:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.245 08:40:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:16.245 08:40:54 -- pm/common@44 -- $ pid=5451 00:03:16.245 08:40:54 -- pm/common@50 -- $ kill -TERM 5451 00:03:16.505 08:40:54 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:16.505 08:40:54 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:16.505 08:40:54 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:16.505 08:40:54 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:16.505 08:40:54 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:16.505 08:40:54 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:16.505 08:40:54 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:16.505 08:40:54 -- scripts/common.sh@336 -- # IFS=.-: 00:03:16.505 08:40:54 -- scripts/common.sh@336 -- # read -ra ver1 00:03:16.505 08:40:54 -- scripts/common.sh@337 -- # IFS=.-: 00:03:16.505 08:40:54 -- scripts/common.sh@337 -- # read -ra ver2 00:03:16.505 08:40:54 -- scripts/common.sh@338 -- # local 'op=<' 00:03:16.505 08:40:54 -- scripts/common.sh@340 -- # ver1_l=2 00:03:16.505 08:40:54 -- scripts/common.sh@341 -- # ver2_l=1 00:03:16.505 08:40:54 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:16.505 08:40:54 -- scripts/common.sh@344 -- # case "$op" in 00:03:16.505 
08:40:54 -- scripts/common.sh@345 -- # : 1 00:03:16.505 08:40:54 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:16.505 08:40:54 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:16.505 08:40:54 -- scripts/common.sh@365 -- # decimal 1 00:03:16.505 08:40:54 -- scripts/common.sh@353 -- # local d=1 00:03:16.505 08:40:54 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:16.505 08:40:54 -- scripts/common.sh@355 -- # echo 1 00:03:16.505 08:40:54 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:16.505 08:40:54 -- scripts/common.sh@366 -- # decimal 2 00:03:16.505 08:40:54 -- scripts/common.sh@353 -- # local d=2 00:03:16.505 08:40:54 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:16.505 08:40:54 -- scripts/common.sh@355 -- # echo 2 00:03:16.505 08:40:54 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:16.505 08:40:54 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:16.505 08:40:54 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:16.505 08:40:54 -- scripts/common.sh@368 -- # return 0 00:03:16.505 08:40:54 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:16.505 08:40:54 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:16.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.505 --rc genhtml_branch_coverage=1 00:03:16.505 --rc genhtml_function_coverage=1 00:03:16.505 --rc genhtml_legend=1 00:03:16.505 --rc geninfo_all_blocks=1 00:03:16.505 --rc geninfo_unexecuted_blocks=1 00:03:16.505 00:03:16.505 ' 00:03:16.506 08:40:54 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:16.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.506 --rc genhtml_branch_coverage=1 00:03:16.506 --rc genhtml_function_coverage=1 00:03:16.506 --rc genhtml_legend=1 00:03:16.506 --rc geninfo_all_blocks=1 00:03:16.506 --rc geninfo_unexecuted_blocks=1 00:03:16.506 00:03:16.506 ' 00:03:16.506 08:40:54 -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:16.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.506 --rc genhtml_branch_coverage=1 00:03:16.506 --rc genhtml_function_coverage=1 00:03:16.506 --rc genhtml_legend=1 00:03:16.506 --rc geninfo_all_blocks=1 00:03:16.506 --rc geninfo_unexecuted_blocks=1 00:03:16.506 00:03:16.506 ' 00:03:16.506 08:40:54 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:16.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.506 --rc genhtml_branch_coverage=1 00:03:16.506 --rc genhtml_function_coverage=1 00:03:16.506 --rc genhtml_legend=1 00:03:16.506 --rc geninfo_all_blocks=1 00:03:16.506 --rc geninfo_unexecuted_blocks=1 00:03:16.506 00:03:16.506 ' 00:03:16.506 08:40:54 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:16.506 08:40:54 -- nvmf/common.sh@7 -- # uname -s 00:03:16.506 08:40:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:16.506 08:40:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:16.506 08:40:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:16.506 08:40:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:16.506 08:40:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:16.506 08:40:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:16.506 08:40:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:16.506 08:40:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:16.506 08:40:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:16.506 08:40:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:16.506 08:40:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7b450651-4bf1-412f-b307-e5438f919ee2 00:03:16.506 08:40:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=7b450651-4bf1-412f-b307-e5438f919ee2 00:03:16.506 08:40:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:16.506 08:40:54 -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:03:16.506 08:40:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:16.506 08:40:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:16.506 08:40:54 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:16.506 08:40:54 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:16.506 08:40:54 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:16.506 08:40:54 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:16.506 08:40:54 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:16.506 08:40:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.506 08:40:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.506 08:40:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.506 08:40:54 -- paths/export.sh@5 -- # export PATH 00:03:16.506 08:40:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.506 08:40:54 -- nvmf/common.sh@51 -- # : 
0 00:03:16.506 08:40:54 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:16.506 08:40:54 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:16.506 08:40:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:16.506 08:40:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:16.506 08:40:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:16.506 08:40:54 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:16.506 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:16.506 08:40:54 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:16.506 08:40:54 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:16.506 08:40:54 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:16.506 08:40:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:16.506 08:40:54 -- spdk/autotest.sh@32 -- # uname -s 00:03:16.506 08:40:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:16.506 08:40:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:16.506 08:40:54 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:16.506 08:40:54 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:16.506 08:40:54 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:16.506 08:40:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:16.506 08:40:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:16.506 08:40:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:16.506 08:40:54 -- spdk/autotest.sh@48 -- # udevadm_pid=54372 00:03:16.506 08:40:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:16.506 08:40:54 -- pm/common@17 -- # local monitor 00:03:16.506 08:40:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.506 08:40:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.506 08:40:54 -- pm/common@25 
-- # sleep 1 00:03:16.506 08:40:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:16.506 08:40:54 -- pm/common@21 -- # date +%s 00:03:16.506 08:40:54 -- pm/common@21 -- # date +%s 00:03:16.506 08:40:54 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727512854 00:03:16.766 08:40:54 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727512854 00:03:16.766 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727512854_collect-cpu-load.pm.log 00:03:16.766 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727512854_collect-vmstat.pm.log 00:03:17.703 08:40:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:17.703 08:40:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:17.703 08:40:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:17.703 08:40:55 -- common/autotest_common.sh@10 -- # set +x 00:03:17.703 08:40:55 -- spdk/autotest.sh@59 -- # create_test_list 00:03:17.703 08:40:55 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:17.703 08:40:55 -- common/autotest_common.sh@10 -- # set +x 00:03:17.703 08:40:55 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:17.703 08:40:55 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:17.703 08:40:55 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:17.703 08:40:55 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:17.703 08:40:55 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:17.703 08:40:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:17.703 08:40:55 -- common/autotest_common.sh@1455 -- # uname 00:03:17.703 08:40:55 -- 
common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:17.703 08:40:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:17.703 08:40:55 -- common/autotest_common.sh@1475 -- # uname 00:03:17.703 08:40:55 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:17.703 08:40:55 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:17.703 08:40:55 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:17.703 lcov: LCOV version 1.15 00:03:17.704 08:40:55 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:32.615 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:32.615 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:47.511 08:41:23 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:47.511 08:41:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:47.511 08:41:23 -- common/autotest_common.sh@10 -- # set +x 00:03:47.511 08:41:23 -- spdk/autotest.sh@78 -- # rm -f 00:03:47.511 08:41:23 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:47.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.511 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:47.511 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:47.511 08:41:24 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:47.511 08:41:24 -- common/autotest_common.sh@1655 -- # 
zoned_devs=() 00:03:47.511 08:41:24 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:47.511 08:41:24 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:47.511 08:41:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:47.511 08:41:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:47.511 08:41:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:47.511 08:41:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:47.511 08:41:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:47.511 08:41:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:47.511 08:41:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:03:47.511 08:41:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:03:47.511 08:41:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:47.511 08:41:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:47.511 08:41:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:47.511 08:41:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:03:47.511 08:41:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:03:47.511 08:41:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:47.511 08:41:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:47.511 08:41:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:47.511 08:41:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:47.511 08:41:24 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:47.511 08:41:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:47.511 08:41:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:47.511 08:41:24 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:47.511 
08:41:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.511 08:41:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:47.511 08:41:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:47.511 08:41:24 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:47.511 08:41:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:47.511 No valid GPT data, bailing 00:03:47.511 08:41:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:47.511 08:41:24 -- scripts/common.sh@394 -- # pt= 00:03:47.511 08:41:24 -- scripts/common.sh@395 -- # return 1 00:03:47.511 08:41:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:47.511 1+0 records in 00:03:47.511 1+0 records out 00:03:47.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050575 s, 207 MB/s 00:03:47.511 08:41:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.511 08:41:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:47.511 08:41:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:03:47.511 08:41:24 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:03:47.511 08:41:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:03:47.511 No valid GPT data, bailing 00:03:47.511 08:41:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:47.511 08:41:24 -- scripts/common.sh@394 -- # pt= 00:03:47.511 08:41:24 -- scripts/common.sh@395 -- # return 1 00:03:47.511 08:41:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:03:47.511 1+0 records in 00:03:47.511 1+0 records out 00:03:47.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00701128 s, 150 MB/s 00:03:47.511 08:41:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.511 08:41:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:47.511 08:41:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:03:47.511 
08:41:25 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:03:47.511 08:41:25 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:03:47.511 No valid GPT data, bailing 00:03:47.511 08:41:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:47.511 08:41:25 -- scripts/common.sh@394 -- # pt= 00:03:47.511 08:41:25 -- scripts/common.sh@395 -- # return 1 00:03:47.511 08:41:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:03:47.511 1+0 records in 00:03:47.511 1+0 records out 00:03:47.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0066135 s, 159 MB/s 00:03:47.511 08:41:25 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.511 08:41:25 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:47.511 08:41:25 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:47.511 08:41:25 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:47.511 08:41:25 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:47.511 No valid GPT data, bailing 00:03:47.511 08:41:25 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:47.511 08:41:25 -- scripts/common.sh@394 -- # pt= 00:03:47.511 08:41:25 -- scripts/common.sh@395 -- # return 1 00:03:47.511 08:41:25 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:47.511 1+0 records in 00:03:47.511 1+0 records out 00:03:47.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00653739 s, 160 MB/s 00:03:47.511 08:41:25 -- spdk/autotest.sh@105 -- # sync 00:03:47.511 08:41:25 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:47.511 08:41:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:47.511 08:41:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:50.052 08:41:27 -- spdk/autotest.sh@111 -- # uname -s 00:03:50.052 08:41:27 -- spdk/autotest.sh@111 -- # [[ Linux == 
Linux ]] 00:03:50.052 08:41:27 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:50.052 08:41:27 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:50.992 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:50.992 Hugepages 00:03:50.992 node hugesize free / total 00:03:50.992 node0 1048576kB 0 / 0 00:03:50.992 node0 2048kB 0 / 0 00:03:50.992 00:03:50.992 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:50.992 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:50.992 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:51.251 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:03:51.251 08:41:29 -- spdk/autotest.sh@117 -- # uname -s 00:03:51.251 08:41:29 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:51.251 08:41:29 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:51.251 08:41:29 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:52.188 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:52.188 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.188 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.188 08:41:30 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:53.568 08:41:31 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:53.568 08:41:31 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:53.568 08:41:31 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:53.568 08:41:31 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:53.568 08:41:31 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:53.568 08:41:31 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:53.568 08:41:31 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:53.568 08:41:31 -- common/autotest_common.sh@1497 
-- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:53.568 08:41:31 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:53.568 08:41:31 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:03:53.568 08:41:31 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:53.568 08:41:31 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.828 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.828 Waiting for block devices as requested 00:03:54.087 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:54.087 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:54.087 08:41:32 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:54.087 08:41:32 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:54.087 08:41:32 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:03:54.088 08:41:32 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:54.088 08:41:32 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:54.088 08:41:32 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:54.088 08:41:32 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:54.088 08:41:32 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:03:54.088 08:41:32 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:03:54.088 08:41:32 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:03:54.088 08:41:32 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:03:54.088 08:41:32 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:54.088 08:41:32 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:54.088 08:41:32 -- 
common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:54.088 08:41:32 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:54.088 08:41:32 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:54.088 08:41:32 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:03:54.088 08:41:32 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:54.088 08:41:32 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:54.361 08:41:32 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:54.361 08:41:32 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:54.361 08:41:32 -- common/autotest_common.sh@1541 -- # continue 00:03:54.361 08:41:32 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:54.361 08:41:32 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:54.361 08:41:32 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:54.361 08:41:32 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:03:54.361 08:41:32 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:54.361 08:41:32 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:54.361 08:41:32 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:54.361 08:41:32 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:54.361 08:41:32 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:54.361 08:41:32 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:54.361 08:41:32 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:54.361 08:41:32 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:54.361 08:41:32 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:54.361 08:41:32 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:54.361 08:41:32 -- 
common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:54.361 08:41:32 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:54.361 08:41:32 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:54.361 08:41:32 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:54.361 08:41:32 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:54.361 08:41:32 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:54.361 08:41:32 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:54.361 08:41:32 -- common/autotest_common.sh@1541 -- # continue 00:03:54.361 08:41:32 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:54.361 08:41:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:54.361 08:41:32 -- common/autotest_common.sh@10 -- # set +x 00:03:54.361 08:41:32 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:54.361 08:41:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:54.361 08:41:32 -- common/autotest_common.sh@10 -- # set +x 00:03:54.361 08:41:32 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.319 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.319 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:55.319 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:55.319 08:41:33 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:55.319 08:41:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:55.319 08:41:33 -- common/autotest_common.sh@10 -- # set +x 00:03:55.319 08:41:33 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:55.319 08:41:33 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:55.578 08:41:33 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:55.578 08:41:33 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:55.578 08:41:33 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:55.578 08:41:33 -- 
common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:55.578 08:41:33 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:55.578 08:41:33 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:55.578 08:41:33 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:55.578 08:41:33 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:55.578 08:41:33 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:55.578 08:41:33 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:55.578 08:41:33 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:55.578 08:41:33 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:03:55.578 08:41:33 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:55.579 08:41:33 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:55.579 08:41:33 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:55.579 08:41:33 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:55.579 08:41:33 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:55.579 08:41:33 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:55.579 08:41:33 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:55.579 08:41:33 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:55.579 08:41:33 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:55.579 08:41:33 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:55.579 08:41:33 -- common/autotest_common.sh@1570 -- # return 0 00:03:55.579 08:41:33 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:55.579 08:41:33 -- common/autotest_common.sh@1578 -- # return 0 00:03:55.579 08:41:33 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:55.579 08:41:33 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 
']' 00:03:55.579 08:41:33 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:55.579 08:41:33 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:55.579 08:41:33 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:55.579 08:41:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:55.579 08:41:33 -- common/autotest_common.sh@10 -- # set +x 00:03:55.579 08:41:33 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:55.579 08:41:33 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:55.579 08:41:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:55.579 08:41:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.579 08:41:33 -- common/autotest_common.sh@10 -- # set +x 00:03:55.579 ************************************ 00:03:55.579 START TEST env 00:03:55.579 ************************************ 00:03:55.579 08:41:33 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:55.579 * Looking for test storage... 
00:03:55.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:55.839 08:41:33 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:55.839 08:41:33 env -- common/autotest_common.sh@1681 -- # lcov --version 00:03:55.839 08:41:33 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:55.839 08:41:33 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:55.839 08:41:33 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:55.839 08:41:33 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:55.839 08:41:33 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:55.839 08:41:33 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:55.839 08:41:33 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:55.839 08:41:33 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:55.839 08:41:33 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:55.839 08:41:33 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:55.839 08:41:33 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:55.839 08:41:33 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:55.839 08:41:33 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:55.839 08:41:33 env -- scripts/common.sh@344 -- # case "$op" in 00:03:55.839 08:41:33 env -- scripts/common.sh@345 -- # : 1 00:03:55.839 08:41:33 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:55.839 08:41:33 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:55.839 08:41:33 env -- scripts/common.sh@365 -- # decimal 1 00:03:55.839 08:41:33 env -- scripts/common.sh@353 -- # local d=1 00:03:55.839 08:41:33 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:55.839 08:41:33 env -- scripts/common.sh@355 -- # echo 1 00:03:55.839 08:41:33 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:55.839 08:41:33 env -- scripts/common.sh@366 -- # decimal 2 00:03:55.839 08:41:33 env -- scripts/common.sh@353 -- # local d=2 00:03:55.839 08:41:33 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:55.839 08:41:33 env -- scripts/common.sh@355 -- # echo 2 00:03:55.839 08:41:33 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:55.839 08:41:33 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:55.839 08:41:33 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:55.839 08:41:33 env -- scripts/common.sh@368 -- # return 0 00:03:55.839 08:41:33 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:55.839 08:41:33 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:55.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.839 --rc genhtml_branch_coverage=1 00:03:55.839 --rc genhtml_function_coverage=1 00:03:55.839 --rc genhtml_legend=1 00:03:55.839 --rc geninfo_all_blocks=1 00:03:55.839 --rc geninfo_unexecuted_blocks=1 00:03:55.839 00:03:55.839 ' 00:03:55.839 08:41:33 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:55.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.839 --rc genhtml_branch_coverage=1 00:03:55.839 --rc genhtml_function_coverage=1 00:03:55.839 --rc genhtml_legend=1 00:03:55.839 --rc geninfo_all_blocks=1 00:03:55.839 --rc geninfo_unexecuted_blocks=1 00:03:55.839 00:03:55.839 ' 00:03:55.839 08:41:33 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:55.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:55.839 --rc genhtml_branch_coverage=1 00:03:55.839 --rc genhtml_function_coverage=1 00:03:55.839 --rc genhtml_legend=1 00:03:55.839 --rc geninfo_all_blocks=1 00:03:55.839 --rc geninfo_unexecuted_blocks=1 00:03:55.839 00:03:55.839 ' 00:03:55.839 08:41:33 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:55.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.839 --rc genhtml_branch_coverage=1 00:03:55.839 --rc genhtml_function_coverage=1 00:03:55.839 --rc genhtml_legend=1 00:03:55.839 --rc geninfo_all_blocks=1 00:03:55.839 --rc geninfo_unexecuted_blocks=1 00:03:55.839 00:03:55.839 ' 00:03:55.839 08:41:33 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:55.839 08:41:33 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:55.839 08:41:33 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.839 08:41:33 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.839 ************************************ 00:03:55.839 START TEST env_memory 00:03:55.839 ************************************ 00:03:55.839 08:41:33 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:55.839 00:03:55.839 00:03:55.839 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.839 http://cunit.sourceforge.net/ 00:03:55.839 00:03:55.839 00:03:55.839 Suite: memory 00:03:55.839 Test: alloc and free memory map ...[2024-09-28 08:41:33.745077] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:55.839 passed 00:03:55.839 Test: mem map translation ...[2024-09-28 08:41:33.788477] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:55.839 [2024-09-28 08:41:33.788520] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:55.839 [2024-09-28 08:41:33.788580] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:55.839 [2024-09-28 08:41:33.788605] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:56.098 passed 00:03:56.098 Test: mem map registration ...[2024-09-28 08:41:33.852415] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:56.099 [2024-09-28 08:41:33.852453] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:56.099 passed 00:03:56.099 Test: mem map adjacent registrations ...passed 00:03:56.099 00:03:56.099 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.099 suites 1 1 n/a 0 0 00:03:56.099 tests 4 4 4 0 0 00:03:56.099 asserts 152 152 152 0 n/a 00:03:56.099 00:03:56.099 Elapsed time = 0.234 seconds 00:03:56.099 00:03:56.099 real 0m0.277s 00:03:56.099 user 0m0.235s 00:03:56.099 sys 0m0.031s 00:03:56.099 08:41:33 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:56.099 08:41:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:56.099 ************************************ 00:03:56.099 END TEST env_memory 00:03:56.099 ************************************ 00:03:56.099 08:41:34 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:56.099 08:41:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:56.099 08:41:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.099 08:41:34 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.099 
************************************ 00:03:56.099 START TEST env_vtophys 00:03:56.099 ************************************ 00:03:56.099 08:41:34 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:56.099 EAL: lib.eal log level changed from notice to debug 00:03:56.099 EAL: Detected lcore 0 as core 0 on socket 0 00:03:56.099 EAL: Detected lcore 1 as core 0 on socket 0 00:03:56.099 EAL: Detected lcore 2 as core 0 on socket 0 00:03:56.099 EAL: Detected lcore 3 as core 0 on socket 0 00:03:56.099 EAL: Detected lcore 4 as core 0 on socket 0 00:03:56.099 EAL: Detected lcore 5 as core 0 on socket 0 00:03:56.099 EAL: Detected lcore 6 as core 0 on socket 0 00:03:56.099 EAL: Detected lcore 7 as core 0 on socket 0 00:03:56.099 EAL: Detected lcore 8 as core 0 on socket 0 00:03:56.099 EAL: Detected lcore 9 as core 0 on socket 0 00:03:56.099 EAL: Maximum logical cores by configuration: 128 00:03:56.099 EAL: Detected CPU lcores: 10 00:03:56.099 EAL: Detected NUMA nodes: 1 00:03:56.099 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:56.099 EAL: Detected shared linkage of DPDK 00:03:56.358 EAL: No shared files mode enabled, IPC will be disabled 00:03:56.358 EAL: Selected IOVA mode 'PA' 00:03:56.358 EAL: Probing VFIO support... 00:03:56.358 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:56.358 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:56.358 EAL: Ask a virtual area of 0x2e000 bytes 00:03:56.358 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:56.358 EAL: Setting up physically contiguous memory... 
00:03:56.358 EAL: Setting maximum number of open files to 524288 00:03:56.358 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:56.358 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:56.358 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.358 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:56.358 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.358 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.358 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:56.358 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:56.358 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.358 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:56.358 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.358 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.358 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:56.358 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:56.358 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.358 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:56.358 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.358 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.358 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:56.358 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:56.358 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.358 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:56.358 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.358 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.358 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:56.358 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:56.358 EAL: Hugepages will be freed exactly as allocated. 
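The EAL setup above pairs each of the four memseg lists with a small 0x61000-byte header area and a 0x400000000-byte VA reservation. Those hex sizes are easier to read once converted, and they cross-check against the advertised n_segs:8192 of 2 MiB hugepages per list:

```shell
# Each memseg list above reserves a 0x400000000-byte virtual area:
per_list=$(( 0x400000000 ))
echo "$(( per_list / 1024 / 1024 / 1024 )) GiB per memseg list"
# Four lists -> total virtual address reservation:
echo "$(( 4 * per_list / 1024 / 1024 / 1024 )) GiB total"
# Cross-check: n_segs:8192 segments of 2 MiB hugepages per list:
echo "$(( 8192 * 2 / 1024 )) GiB from n_segs * hugepage_sz"
```

Both computations agree at 16 GiB per list, which is why the VA windows in the log are spaced 0x400000000 apart.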
00:03:56.358 EAL: No shared files mode enabled, IPC is disabled 00:03:56.358 EAL: No shared files mode enabled, IPC is disabled 00:03:56.358 EAL: TSC frequency is ~2290000 KHz 00:03:56.358 EAL: Main lcore 0 is ready (tid=7f83d581ba40;cpuset=[0]) 00:03:56.358 EAL: Trying to obtain current memory policy. 00:03:56.358 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.358 EAL: Restoring previous memory policy: 0 00:03:56.358 EAL: request: mp_malloc_sync 00:03:56.359 EAL: No shared files mode enabled, IPC is disabled 00:03:56.359 EAL: Heap on socket 0 was expanded by 2MB 00:03:56.359 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:56.359 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:56.359 EAL: Mem event callback 'spdk:(nil)' registered 00:03:56.359 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:56.359 00:03:56.359 00:03:56.359 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.359 http://cunit.sourceforge.net/ 00:03:56.359 00:03:56.359 00:03:56.359 Suite: components_suite 00:03:56.618 Test: vtophys_malloc_test ...passed 00:03:56.618 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:56.618 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.619 EAL: Restoring previous memory policy: 4 00:03:56.619 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.619 EAL: request: mp_malloc_sync 00:03:56.619 EAL: No shared files mode enabled, IPC is disabled 00:03:56.619 EAL: Heap on socket 0 was expanded by 4MB 00:03:56.619 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.619 EAL: request: mp_malloc_sync 00:03:56.619 EAL: No shared files mode enabled, IPC is disabled 00:03:56.619 EAL: Heap on socket 0 was shrunk by 4MB 00:03:56.619 EAL: Trying to obtain current memory policy. 
00:03:56.619 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.619 EAL: Restoring previous memory policy: 4 00:03:56.619 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.619 EAL: request: mp_malloc_sync 00:03:56.619 EAL: No shared files mode enabled, IPC is disabled 00:03:56.619 EAL: Heap on socket 0 was expanded by 6MB 00:03:56.878 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.878 EAL: request: mp_malloc_sync 00:03:56.878 EAL: No shared files mode enabled, IPC is disabled 00:03:56.878 EAL: Heap on socket 0 was shrunk by 6MB 00:03:56.878 EAL: Trying to obtain current memory policy. 00:03:56.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.878 EAL: Restoring previous memory policy: 4 00:03:56.878 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.878 EAL: request: mp_malloc_sync 00:03:56.878 EAL: No shared files mode enabled, IPC is disabled 00:03:56.878 EAL: Heap on socket 0 was expanded by 10MB 00:03:56.878 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.878 EAL: request: mp_malloc_sync 00:03:56.878 EAL: No shared files mode enabled, IPC is disabled 00:03:56.878 EAL: Heap on socket 0 was shrunk by 10MB 00:03:56.878 EAL: Trying to obtain current memory policy. 00:03:56.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.878 EAL: Restoring previous memory policy: 4 00:03:56.878 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.878 EAL: request: mp_malloc_sync 00:03:56.878 EAL: No shared files mode enabled, IPC is disabled 00:03:56.878 EAL: Heap on socket 0 was expanded by 18MB 00:03:56.878 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.878 EAL: request: mp_malloc_sync 00:03:56.878 EAL: No shared files mode enabled, IPC is disabled 00:03:56.878 EAL: Heap on socket 0 was shrunk by 18MB 00:03:56.878 EAL: Trying to obtain current memory policy. 
00:03:56.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.878 EAL: Restoring previous memory policy: 4 00:03:56.878 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.878 EAL: request: mp_malloc_sync 00:03:56.878 EAL: No shared files mode enabled, IPC is disabled 00:03:56.878 EAL: Heap on socket 0 was expanded by 34MB 00:03:56.878 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.878 EAL: request: mp_malloc_sync 00:03:56.878 EAL: No shared files mode enabled, IPC is disabled 00:03:56.878 EAL: Heap on socket 0 was shrunk by 34MB 00:03:56.878 EAL: Trying to obtain current memory policy. 00:03:56.879 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.879 EAL: Restoring previous memory policy: 4 00:03:56.879 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.879 EAL: request: mp_malloc_sync 00:03:56.879 EAL: No shared files mode enabled, IPC is disabled 00:03:56.879 EAL: Heap on socket 0 was expanded by 66MB 00:03:57.138 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.138 EAL: request: mp_malloc_sync 00:03:57.138 EAL: No shared files mode enabled, IPC is disabled 00:03:57.138 EAL: Heap on socket 0 was shrunk by 66MB 00:03:57.138 EAL: Trying to obtain current memory policy. 00:03:57.138 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.138 EAL: Restoring previous memory policy: 4 00:03:57.138 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.138 EAL: request: mp_malloc_sync 00:03:57.138 EAL: No shared files mode enabled, IPC is disabled 00:03:57.138 EAL: Heap on socket 0 was expanded by 130MB 00:03:57.398 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.398 EAL: request: mp_malloc_sync 00:03:57.398 EAL: No shared files mode enabled, IPC is disabled 00:03:57.398 EAL: Heap on socket 0 was shrunk by 130MB 00:03:57.657 EAL: Trying to obtain current memory policy. 
00:03:57.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.657 EAL: Restoring previous memory policy: 4 00:03:57.657 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.657 EAL: request: mp_malloc_sync 00:03:57.657 EAL: No shared files mode enabled, IPC is disabled 00:03:57.657 EAL: Heap on socket 0 was expanded by 258MB 00:03:58.225 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.225 EAL: request: mp_malloc_sync 00:03:58.225 EAL: No shared files mode enabled, IPC is disabled 00:03:58.225 EAL: Heap on socket 0 was shrunk by 258MB 00:03:58.794 EAL: Trying to obtain current memory policy. 00:03:58.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.794 EAL: Restoring previous memory policy: 4 00:03:58.794 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.794 EAL: request: mp_malloc_sync 00:03:58.794 EAL: No shared files mode enabled, IPC is disabled 00:03:58.794 EAL: Heap on socket 0 was expanded by 514MB 00:03:59.734 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.734 EAL: request: mp_malloc_sync 00:03:59.734 EAL: No shared files mode enabled, IPC is disabled 00:03:59.734 EAL: Heap on socket 0 was shrunk by 514MB 00:04:00.673 EAL: Trying to obtain current memory policy. 
00:04:00.673 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.673 EAL: Restoring previous memory policy: 4 00:04:00.673 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.673 EAL: request: mp_malloc_sync 00:04:00.673 EAL: No shared files mode enabled, IPC is disabled 00:04:00.673 EAL: Heap on socket 0 was expanded by 1026MB 00:04:02.583 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.583 EAL: request: mp_malloc_sync 00:04:02.583 EAL: No shared files mode enabled, IPC is disabled 00:04:02.583 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:04.492 passed 00:04:04.493 00:04:04.493 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.493 suites 1 1 n/a 0 0 00:04:04.493 tests 2 2 2 0 0 00:04:04.493 asserts 5964 5964 5964 0 n/a 00:04:04.493 00:04:04.493 Elapsed time = 7.778 seconds 00:04:04.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.493 EAL: request: mp_malloc_sync 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: Heap on socket 0 was shrunk by 2MB 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 EAL: No shared files mode enabled, IPC is disabled 00:04:04.493 00:04:04.493 real 0m8.097s 00:04:04.493 user 0m7.143s 00:04:04.493 sys 0m0.803s 00:04:04.493 08:41:42 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.493 08:41:42 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:04.493 ************************************ 00:04:04.493 END TEST env_vtophys 00:04:04.493 ************************************ 00:04:04.493 08:41:42 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:04.493 08:41:42 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.493 08:41:42 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.493 08:41:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.493 
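The vtophys_spdk_malloc_test expansions above step through 4, 6, 10, 18, 34, 66, 130, 258, 514 and 1026 MB. That is 2^k + 2 MB for k = 1..10 (my reading of the sequence; the suite itself does not state the formula):

```shell
# Reproduce the heap expansion sizes seen in the vtophys test log.
sizes=()
for k in $(seq 1 10); do
    sizes+=( "$(( 2 ** k + 2 ))" )
done
echo "expansions (MB): ${sizes[*]}"
```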
************************************ 00:04:04.493 START TEST env_pci 00:04:04.493 ************************************ 00:04:04.493 08:41:42 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:04.493 00:04:04.493 00:04:04.493 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.493 http://cunit.sourceforge.net/ 00:04:04.493 00:04:04.493 00:04:04.493 Suite: pci 00:04:04.493 Test: pci_hook ...[2024-09-28 08:41:42.229126] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56672 has claimed it 00:04:04.493 passed 00:04:04.493 00:04:04.493 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.493 suites 1 1 n/a 0 0 00:04:04.493 tests 1 1 1 0 0 00:04:04.493 asserts 25 25 25 0 n/a 00:04:04.493 00:04:04.493 Elapsed time = 0.006 seconds 00:04:04.493 EAL: Cannot find device (10000:00:01.0) 00:04:04.493 EAL: Failed to attach device on primary process 00:04:04.493 00:04:04.493 real 0m0.099s 00:04:04.493 user 0m0.043s 00:04:04.493 sys 0m0.055s 00:04:04.493 08:41:42 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.493 08:41:42 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:04.493 ************************************ 00:04:04.493 END TEST env_pci 00:04:04.493 ************************************ 00:04:04.493 08:41:42 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:04.493 08:41:42 env -- env/env.sh@15 -- # uname 00:04:04.493 08:41:42 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:04.493 08:41:42 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:04.493 08:41:42 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:04.493 08:41:42 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:04.493 08:41:42 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.493 08:41:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.493 ************************************ 00:04:04.493 START TEST env_dpdk_post_init 00:04:04.493 ************************************ 00:04:04.493 08:41:42 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:04.493 EAL: Detected CPU lcores: 10 00:04:04.493 EAL: Detected NUMA nodes: 1 00:04:04.493 EAL: Detected shared linkage of DPDK 00:04:04.493 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:04.493 EAL: Selected IOVA mode 'PA' 00:04:04.753 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:04.753 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:04.753 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:04.753 Starting DPDK initialization... 00:04:04.753 Starting SPDK post initialization... 00:04:04.753 SPDK NVMe probe 00:04:04.753 Attaching to 0000:00:10.0 00:04:04.753 Attaching to 0000:00:11.0 00:04:04.753 Attached to 0000:00:10.0 00:04:04.753 Attached to 0000:00:11.0 00:04:04.753 Cleaning up... 
00:04:04.753 00:04:04.753 real 0m0.285s 00:04:04.753 user 0m0.089s 00:04:04.753 sys 0m0.096s 00:04:04.753 08:41:42 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.753 08:41:42 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.753 ************************************ 00:04:04.753 END TEST env_dpdk_post_init 00:04:04.753 ************************************ 00:04:04.753 08:41:42 env -- env/env.sh@26 -- # uname 00:04:04.753 08:41:42 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:04.753 08:41:42 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:04.753 08:41:42 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.753 08:41:42 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.753 08:41:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.753 ************************************ 00:04:04.753 START TEST env_mem_callbacks 00:04:04.753 ************************************ 00:04:04.753 08:41:42 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:05.013 EAL: Detected CPU lcores: 10 00:04:05.013 EAL: Detected NUMA nodes: 1 00:04:05.013 EAL: Detected shared linkage of DPDK 00:04:05.013 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:05.013 EAL: Selected IOVA mode 'PA' 00:04:05.013 00:04:05.013 00:04:05.013 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.013 http://cunit.sourceforge.net/ 00:04:05.013 00:04:05.013 00:04:05.013 Suite: memory 00:04:05.013 Test: test ... 
00:04:05.013 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:05.013 register 0x200000200000 2097152 00:04:05.013 malloc 3145728 00:04:05.013 register 0x200000400000 4194304 00:04:05.013 buf 0x2000004fffc0 len 3145728 PASSED 00:04:05.013 malloc 64 00:04:05.013 buf 0x2000004ffec0 len 64 PASSED 00:04:05.013 malloc 4194304 00:04:05.013 register 0x200000800000 6291456 00:04:05.013 buf 0x2000009fffc0 len 4194304 PASSED 00:04:05.013 free 0x2000004fffc0 3145728 00:04:05.013 free 0x2000004ffec0 64 00:04:05.013 unregister 0x200000400000 4194304 PASSED 00:04:05.013 free 0x2000009fffc0 4194304 00:04:05.013 unregister 0x200000800000 6291456 PASSED 00:04:05.013 malloc 8388608 00:04:05.013 register 0x200000400000 10485760 00:04:05.013 buf 0x2000005fffc0 len 8388608 PASSED 00:04:05.013 free 0x2000005fffc0 8388608 00:04:05.013 unregister 0x200000400000 10485760 PASSED 00:04:05.013 passed 00:04:05.013 00:04:05.013 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.013 suites 1 1 n/a 0 0 00:04:05.013 tests 1 1 1 0 0 00:04:05.013 asserts 15 15 15 0 n/a 00:04:05.013 00:04:05.013 Elapsed time = 0.086 seconds 00:04:05.013 00:04:05.013 real 0m0.295s 00:04:05.013 user 0m0.115s 00:04:05.013 sys 0m0.078s 00:04:05.013 08:41:43 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.013 08:41:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:05.013 ************************************ 00:04:05.013 END TEST env_mem_callbacks 00:04:05.013 ************************************ 00:04:05.272 00:04:05.272 real 0m9.616s 00:04:05.272 user 0m7.842s 00:04:05.272 sys 0m1.423s 00:04:05.272 08:41:43 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.272 08:41:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.272 ************************************ 00:04:05.272 END TEST env 00:04:05.273 ************************************ 00:04:05.273 08:41:43 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:05.273 08:41:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.273 08:41:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.273 08:41:43 -- common/autotest_common.sh@10 -- # set +x 00:04:05.273 ************************************ 00:04:05.273 START TEST rpc 00:04:05.273 ************************************ 00:04:05.273 08:41:43 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:05.273 * Looking for test storage... 00:04:05.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:05.273 08:41:43 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:05.273 08:41:43 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:05.273 08:41:43 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:05.533 08:41:43 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:05.533 08:41:43 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.533 08:41:43 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.533 08:41:43 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.533 08:41:43 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.533 08:41:43 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.533 08:41:43 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.533 08:41:43 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.533 08:41:43 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.533 08:41:43 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.533 08:41:43 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.533 08:41:43 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.533 08:41:43 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:05.533 08:41:43 rpc -- scripts/common.sh@345 -- # : 1 00:04:05.533 08:41:43 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.533 08:41:43 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.533 08:41:43 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:05.533 08:41:43 rpc -- scripts/common.sh@353 -- # local d=1 00:04:05.533 08:41:43 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.533 08:41:43 rpc -- scripts/common.sh@355 -- # echo 1 00:04:05.533 08:41:43 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.533 08:41:43 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:05.533 08:41:43 rpc -- scripts/common.sh@353 -- # local d=2 00:04:05.533 08:41:43 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.533 08:41:43 rpc -- scripts/common.sh@355 -- # echo 2 00:04:05.533 08:41:43 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.533 08:41:43 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.533 08:41:43 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.533 08:41:43 rpc -- scripts/common.sh@368 -- # return 0 00:04:05.533 08:41:43 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.533 08:41:43 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:05.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.533 --rc genhtml_branch_coverage=1 00:04:05.533 --rc genhtml_function_coverage=1 00:04:05.533 --rc genhtml_legend=1 00:04:05.533 --rc geninfo_all_blocks=1 00:04:05.533 --rc geninfo_unexecuted_blocks=1 00:04:05.533 00:04:05.533 ' 00:04:05.533 08:41:43 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:05.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.533 --rc genhtml_branch_coverage=1 00:04:05.533 --rc genhtml_function_coverage=1 00:04:05.533 --rc genhtml_legend=1 00:04:05.533 --rc geninfo_all_blocks=1 00:04:05.533 --rc geninfo_unexecuted_blocks=1 00:04:05.533 00:04:05.533 ' 00:04:05.533 08:41:43 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:05.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:05.533 --rc genhtml_branch_coverage=1 00:04:05.533 --rc genhtml_function_coverage=1 00:04:05.533 --rc genhtml_legend=1 00:04:05.533 --rc geninfo_all_blocks=1 00:04:05.533 --rc geninfo_unexecuted_blocks=1 00:04:05.533 00:04:05.533 ' 00:04:05.533 08:41:43 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:05.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.533 --rc genhtml_branch_coverage=1 00:04:05.533 --rc genhtml_function_coverage=1 00:04:05.533 --rc genhtml_legend=1 00:04:05.533 --rc geninfo_all_blocks=1 00:04:05.533 --rc geninfo_unexecuted_blocks=1 00:04:05.533 00:04:05.533 ' 00:04:05.533 08:41:43 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56799 00:04:05.533 08:41:43 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:05.533 08:41:43 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.533 08:41:43 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56799 00:04:05.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.533 08:41:43 rpc -- common/autotest_common.sh@831 -- # '[' -z 56799 ']' 00:04:05.533 08:41:43 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.533 08:41:43 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:05.533 08:41:43 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.533 08:41:43 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:05.533 08:41:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.533 [2024-09-28 08:41:43.454075] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:05.533 [2024-09-28 08:41:43.454713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56799 ] 00:04:05.793 [2024-09-28 08:41:43.625537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.052 [2024-09-28 08:41:43.850917] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:06.052 [2024-09-28 08:41:43.850982] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56799' to capture a snapshot of events at runtime. 00:04:06.052 [2024-09-28 08:41:43.850992] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:06.052 [2024-09-28 08:41:43.851008] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:06.052 [2024-09-28 08:41:43.851018] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56799 for offline analysis/debug. 
00:04:06.052 [2024-09-28 08:41:43.851057] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.991 08:41:44 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:06.991 08:41:44 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:06.991 08:41:44 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:06.991 08:41:44 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:06.991 08:41:44 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:06.991 08:41:44 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:06.991 08:41:44 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.991 08:41:44 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.991 08:41:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.991 ************************************ 00:04:06.991 START TEST rpc_integrity 00:04:06.991 ************************************ 00:04:06.991 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:06.991 08:41:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:06.991 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.991 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.991 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.991 08:41:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:06.991 08:41:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:06.992 08:41:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:06.992 08:41:44 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:06.992 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.992 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.992 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.992 08:41:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:06.992 08:41:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:06.992 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.992 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.992 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.992 08:41:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:06.992 { 00:04:06.992 "name": "Malloc0", 00:04:06.992 "aliases": [ 00:04:06.992 "c1e78dcc-9c87-4715-b32d-0a04f8884463" 00:04:06.992 ], 00:04:06.992 "product_name": "Malloc disk", 00:04:06.992 "block_size": 512, 00:04:06.992 "num_blocks": 16384, 00:04:06.992 "uuid": "c1e78dcc-9c87-4715-b32d-0a04f8884463", 00:04:06.992 "assigned_rate_limits": { 00:04:06.992 "rw_ios_per_sec": 0, 00:04:06.992 "rw_mbytes_per_sec": 0, 00:04:06.992 "r_mbytes_per_sec": 0, 00:04:06.992 "w_mbytes_per_sec": 0 00:04:06.992 }, 00:04:06.992 "claimed": false, 00:04:06.992 "zoned": false, 00:04:06.992 "supported_io_types": { 00:04:06.992 "read": true, 00:04:06.992 "write": true, 00:04:06.992 "unmap": true, 00:04:06.992 "flush": true, 00:04:06.992 "reset": true, 00:04:06.992 "nvme_admin": false, 00:04:06.992 "nvme_io": false, 00:04:06.992 "nvme_io_md": false, 00:04:06.992 "write_zeroes": true, 00:04:06.992 "zcopy": true, 00:04:06.992 "get_zone_info": false, 00:04:06.992 "zone_management": false, 00:04:06.992 "zone_append": false, 00:04:06.992 "compare": false, 00:04:06.992 "compare_and_write": false, 00:04:06.992 "abort": true, 00:04:06.992 "seek_hole": false, 
00:04:06.992 "seek_data": false, 00:04:06.992 "copy": true, 00:04:06.992 "nvme_iov_md": false 00:04:06.992 }, 00:04:06.992 "memory_domains": [ 00:04:06.992 { 00:04:06.992 "dma_device_id": "system", 00:04:06.992 "dma_device_type": 1 00:04:06.992 }, 00:04:06.992 { 00:04:06.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.992 "dma_device_type": 2 00:04:06.992 } 00:04:06.992 ], 00:04:06.992 "driver_specific": {} 00:04:06.992 } 00:04:06.992 ]' 00:04:06.992 08:41:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:06.992 08:41:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:06.992 08:41:44 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:06.992 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.992 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.992 [2024-09-28 08:41:44.904842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:06.992 [2024-09-28 08:41:44.904947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:06.992 [2024-09-28 08:41:44.904976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:06.992 [2024-09-28 08:41:44.904989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:06.992 [2024-09-28 08:41:44.907384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:06.992 [2024-09-28 08:41:44.907436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:06.992 Passthru0 00:04:06.992 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.992 08:41:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:06.992 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.992 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:06.992 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:06.992 08:41:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:06.992 { 00:04:06.992 "name": "Malloc0", 00:04:06.992 "aliases": [ 00:04:06.992 "c1e78dcc-9c87-4715-b32d-0a04f8884463" 00:04:06.992 ], 00:04:06.992 "product_name": "Malloc disk", 00:04:06.992 "block_size": 512, 00:04:06.992 "num_blocks": 16384, 00:04:06.992 "uuid": "c1e78dcc-9c87-4715-b32d-0a04f8884463", 00:04:06.992 "assigned_rate_limits": { 00:04:06.992 "rw_ios_per_sec": 0, 00:04:06.992 "rw_mbytes_per_sec": 0, 00:04:06.992 "r_mbytes_per_sec": 0, 00:04:06.992 "w_mbytes_per_sec": 0 00:04:06.992 }, 00:04:06.992 "claimed": true, 00:04:06.992 "claim_type": "exclusive_write", 00:04:06.992 "zoned": false, 00:04:06.992 "supported_io_types": { 00:04:06.992 "read": true, 00:04:06.992 "write": true, 00:04:06.992 "unmap": true, 00:04:06.992 "flush": true, 00:04:06.992 "reset": true, 00:04:06.992 "nvme_admin": false, 00:04:06.992 "nvme_io": false, 00:04:06.992 "nvme_io_md": false, 00:04:06.992 "write_zeroes": true, 00:04:06.992 "zcopy": true, 00:04:06.992 "get_zone_info": false, 00:04:06.992 "zone_management": false, 00:04:06.992 "zone_append": false, 00:04:06.992 "compare": false, 00:04:06.992 "compare_and_write": false, 00:04:06.992 "abort": true, 00:04:06.992 "seek_hole": false, 00:04:06.992 "seek_data": false, 00:04:06.992 "copy": true, 00:04:06.992 "nvme_iov_md": false 00:04:06.992 }, 00:04:06.992 "memory_domains": [ 00:04:06.992 { 00:04:06.992 "dma_device_id": "system", 00:04:06.992 "dma_device_type": 1 00:04:06.992 }, 00:04:06.992 { 00:04:06.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.992 "dma_device_type": 2 00:04:06.992 } 00:04:06.992 ], 00:04:06.992 "driver_specific": {} 00:04:06.992 }, 00:04:06.992 { 00:04:06.992 "name": "Passthru0", 00:04:06.992 "aliases": [ 00:04:06.992 "1d5fc576-7659-5ca5-8d57-f8c2590ddbf0" 00:04:06.992 ], 00:04:06.992 "product_name": "passthru", 00:04:06.992 
"block_size": 512, 00:04:06.992 "num_blocks": 16384, 00:04:06.992 "uuid": "1d5fc576-7659-5ca5-8d57-f8c2590ddbf0", 00:04:06.992 "assigned_rate_limits": { 00:04:06.992 "rw_ios_per_sec": 0, 00:04:06.992 "rw_mbytes_per_sec": 0, 00:04:06.992 "r_mbytes_per_sec": 0, 00:04:06.992 "w_mbytes_per_sec": 0 00:04:06.992 }, 00:04:06.992 "claimed": false, 00:04:06.992 "zoned": false, 00:04:06.992 "supported_io_types": { 00:04:06.992 "read": true, 00:04:06.992 "write": true, 00:04:06.992 "unmap": true, 00:04:06.992 "flush": true, 00:04:06.992 "reset": true, 00:04:06.992 "nvme_admin": false, 00:04:06.992 "nvme_io": false, 00:04:06.992 "nvme_io_md": false, 00:04:06.992 "write_zeroes": true, 00:04:06.992 "zcopy": true, 00:04:06.992 "get_zone_info": false, 00:04:06.992 "zone_management": false, 00:04:06.992 "zone_append": false, 00:04:06.992 "compare": false, 00:04:06.992 "compare_and_write": false, 00:04:06.992 "abort": true, 00:04:06.992 "seek_hole": false, 00:04:06.992 "seek_data": false, 00:04:06.992 "copy": true, 00:04:06.992 "nvme_iov_md": false 00:04:06.992 }, 00:04:06.992 "memory_domains": [ 00:04:06.992 { 00:04:06.992 "dma_device_id": "system", 00:04:06.992 "dma_device_type": 1 00:04:06.992 }, 00:04:06.992 { 00:04:06.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.992 "dma_device_type": 2 00:04:06.992 } 00:04:06.992 ], 00:04:06.992 "driver_specific": { 00:04:06.992 "passthru": { 00:04:06.992 "name": "Passthru0", 00:04:06.992 "base_bdev_name": "Malloc0" 00:04:06.992 } 00:04:06.992 } 00:04:06.992 } 00:04:06.992 ]' 00:04:06.992 08:41:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:06.992 08:41:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:06.992 08:41:44 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:06.992 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:06.992 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.252 08:41:44 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.252 08:41:44 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:07.252 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.252 08:41:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.252 08:41:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.252 08:41:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.252 08:41:45 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.252 08:41:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.252 08:41:45 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.252 08:41:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:07.252 08:41:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.252 ************************************ 00:04:07.252 END TEST rpc_integrity 00:04:07.252 ************************************ 00:04:07.252 08:41:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.252 00:04:07.252 real 0m0.343s 00:04:07.252 user 0m0.181s 00:04:07.252 sys 0m0.057s 00:04:07.252 08:41:45 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.252 08:41:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.252 08:41:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:07.252 08:41:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.252 08:41:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.252 08:41:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.252 ************************************ 00:04:07.252 START TEST rpc_plugins 00:04:07.252 ************************************ 00:04:07.252 08:41:45 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:07.252 08:41:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:07.252 08:41:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.252 08:41:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.252 08:41:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.252 08:41:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:07.252 08:41:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:07.252 08:41:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.252 08:41:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.252 08:41:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.252 08:41:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:07.252 { 00:04:07.252 "name": "Malloc1", 00:04:07.252 "aliases": [ 00:04:07.252 "257eeb5a-4085-4d94-9938-938f5d664e20" 00:04:07.252 ], 00:04:07.252 "product_name": "Malloc disk", 00:04:07.252 "block_size": 4096, 00:04:07.252 "num_blocks": 256, 00:04:07.252 "uuid": "257eeb5a-4085-4d94-9938-938f5d664e20", 00:04:07.252 "assigned_rate_limits": { 00:04:07.252 "rw_ios_per_sec": 0, 00:04:07.252 "rw_mbytes_per_sec": 0, 00:04:07.252 "r_mbytes_per_sec": 0, 00:04:07.252 "w_mbytes_per_sec": 0 00:04:07.252 }, 00:04:07.252 "claimed": false, 00:04:07.252 "zoned": false, 00:04:07.252 "supported_io_types": { 00:04:07.252 "read": true, 00:04:07.252 "write": true, 00:04:07.252 "unmap": true, 00:04:07.252 "flush": true, 00:04:07.252 "reset": true, 00:04:07.252 "nvme_admin": false, 00:04:07.252 "nvme_io": false, 00:04:07.252 "nvme_io_md": false, 00:04:07.252 "write_zeroes": true, 00:04:07.252 "zcopy": true, 00:04:07.252 "get_zone_info": false, 00:04:07.252 "zone_management": false, 00:04:07.252 "zone_append": false, 00:04:07.252 "compare": false, 00:04:07.252 "compare_and_write": false, 00:04:07.252 "abort": true, 00:04:07.252 "seek_hole": false, 00:04:07.252 "seek_data": false, 00:04:07.252 "copy": 
true, 00:04:07.252 "nvme_iov_md": false 00:04:07.252 }, 00:04:07.252 "memory_domains": [ 00:04:07.252 { 00:04:07.252 "dma_device_id": "system", 00:04:07.252 "dma_device_type": 1 00:04:07.252 }, 00:04:07.252 { 00:04:07.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.252 "dma_device_type": 2 00:04:07.252 } 00:04:07.252 ], 00:04:07.252 "driver_specific": {} 00:04:07.252 } 00:04:07.252 ]' 00:04:07.252 08:41:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:07.252 08:41:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:07.252 08:41:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:07.252 08:41:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.252 08:41:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.512 08:41:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.512 08:41:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:07.512 08:41:45 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.512 08:41:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.512 08:41:45 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.512 08:41:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:07.512 08:41:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:07.512 ************************************ 00:04:07.512 END TEST rpc_plugins 00:04:07.512 ************************************ 00:04:07.512 08:41:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:07.512 00:04:07.512 real 0m0.166s 00:04:07.512 user 0m0.093s 00:04:07.512 sys 0m0.029s 00:04:07.512 08:41:45 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.512 08:41:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.512 08:41:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:07.512 08:41:45 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.512 08:41:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.512 08:41:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.512 ************************************ 00:04:07.512 START TEST rpc_trace_cmd_test 00:04:07.512 ************************************ 00:04:07.512 08:41:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:07.512 08:41:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:07.512 08:41:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:07.512 08:41:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.512 08:41:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.512 08:41:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.512 08:41:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:07.512 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56799", 00:04:07.512 "tpoint_group_mask": "0x8", 00:04:07.512 "iscsi_conn": { 00:04:07.512 "mask": "0x2", 00:04:07.512 "tpoint_mask": "0x0" 00:04:07.512 }, 00:04:07.512 "scsi": { 00:04:07.512 "mask": "0x4", 00:04:07.512 "tpoint_mask": "0x0" 00:04:07.512 }, 00:04:07.512 "bdev": { 00:04:07.512 "mask": "0x8", 00:04:07.512 "tpoint_mask": "0xffffffffffffffff" 00:04:07.512 }, 00:04:07.512 "nvmf_rdma": { 00:04:07.512 "mask": "0x10", 00:04:07.512 "tpoint_mask": "0x0" 00:04:07.512 }, 00:04:07.512 "nvmf_tcp": { 00:04:07.512 "mask": "0x20", 00:04:07.512 "tpoint_mask": "0x0" 00:04:07.512 }, 00:04:07.512 "ftl": { 00:04:07.512 "mask": "0x40", 00:04:07.512 "tpoint_mask": "0x0" 00:04:07.512 }, 00:04:07.512 "blobfs": { 00:04:07.512 "mask": "0x80", 00:04:07.512 "tpoint_mask": "0x0" 00:04:07.512 }, 00:04:07.512 "dsa": { 00:04:07.512 "mask": "0x200", 00:04:07.512 "tpoint_mask": "0x0" 00:04:07.512 }, 00:04:07.512 "thread": { 00:04:07.512 "mask": "0x400", 00:04:07.512 
"tpoint_mask": "0x0" 00:04:07.512 }, 00:04:07.512 "nvme_pcie": { 00:04:07.512 "mask": "0x800", 00:04:07.512 "tpoint_mask": "0x0" 00:04:07.512 }, 00:04:07.512 "iaa": { 00:04:07.512 "mask": "0x1000", 00:04:07.512 "tpoint_mask": "0x0" 00:04:07.512 }, 00:04:07.512 "nvme_tcp": { 00:04:07.512 "mask": "0x2000", 00:04:07.512 "tpoint_mask": "0x0" 00:04:07.512 }, 00:04:07.512 "bdev_nvme": { 00:04:07.512 "mask": "0x4000", 00:04:07.512 "tpoint_mask": "0x0" 00:04:07.512 }, 00:04:07.512 "sock": { 00:04:07.512 "mask": "0x8000", 00:04:07.512 "tpoint_mask": "0x0" 00:04:07.512 }, 00:04:07.512 "blob": { 00:04:07.512 "mask": "0x10000", 00:04:07.512 "tpoint_mask": "0x0" 00:04:07.512 }, 00:04:07.512 "bdev_raid": { 00:04:07.512 "mask": "0x20000", 00:04:07.512 "tpoint_mask": "0x0" 00:04:07.512 } 00:04:07.512 }' 00:04:07.512 08:41:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:07.512 08:41:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:04:07.512 08:41:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:07.512 08:41:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:07.512 08:41:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:07.772 08:41:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:07.772 08:41:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:07.772 08:41:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:07.772 08:41:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:07.772 ************************************ 00:04:07.772 END TEST rpc_trace_cmd_test 00:04:07.772 ************************************ 00:04:07.772 08:41:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:07.772 00:04:07.772 real 0m0.248s 00:04:07.772 user 0m0.194s 00:04:07.773 sys 0m0.040s 00:04:07.773 08:41:45 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.773 08:41:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.773 08:41:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:07.773 08:41:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:07.773 08:41:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:07.773 08:41:45 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.773 08:41:45 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.773 08:41:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.773 ************************************ 00:04:07.773 START TEST rpc_daemon_integrity 00:04:07.773 ************************************ 00:04:07.773 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:07.773 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:07.773 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.773 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.773 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.773 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:07.773 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:07.773 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:07.773 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:07.773 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.773 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.037 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.037 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:08.037 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:04:08.037 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.037 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.037 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.037 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:08.037 { 00:04:08.037 "name": "Malloc2", 00:04:08.037 "aliases": [ 00:04:08.037 "fb172a91-a633-45f4-9c81-8719a2281e08" 00:04:08.038 ], 00:04:08.038 "product_name": "Malloc disk", 00:04:08.038 "block_size": 512, 00:04:08.038 "num_blocks": 16384, 00:04:08.038 "uuid": "fb172a91-a633-45f4-9c81-8719a2281e08", 00:04:08.038 "assigned_rate_limits": { 00:04:08.038 "rw_ios_per_sec": 0, 00:04:08.038 "rw_mbytes_per_sec": 0, 00:04:08.038 "r_mbytes_per_sec": 0, 00:04:08.038 "w_mbytes_per_sec": 0 00:04:08.038 }, 00:04:08.038 "claimed": false, 00:04:08.038 "zoned": false, 00:04:08.038 "supported_io_types": { 00:04:08.038 "read": true, 00:04:08.038 "write": true, 00:04:08.038 "unmap": true, 00:04:08.038 "flush": true, 00:04:08.038 "reset": true, 00:04:08.038 "nvme_admin": false, 00:04:08.038 "nvme_io": false, 00:04:08.038 "nvme_io_md": false, 00:04:08.038 "write_zeroes": true, 00:04:08.038 "zcopy": true, 00:04:08.038 "get_zone_info": false, 00:04:08.038 "zone_management": false, 00:04:08.038 "zone_append": false, 00:04:08.038 "compare": false, 00:04:08.038 "compare_and_write": false, 00:04:08.038 "abort": true, 00:04:08.038 "seek_hole": false, 00:04:08.038 "seek_data": false, 00:04:08.038 "copy": true, 00:04:08.038 "nvme_iov_md": false 00:04:08.038 }, 00:04:08.038 "memory_domains": [ 00:04:08.038 { 00:04:08.038 "dma_device_id": "system", 00:04:08.038 "dma_device_type": 1 00:04:08.038 }, 00:04:08.038 { 00:04:08.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.038 "dma_device_type": 2 00:04:08.038 } 00:04:08.038 ], 00:04:08.038 "driver_specific": {} 00:04:08.038 } 00:04:08.038 ]' 
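The `rpc_daemon_integrity` trace above creates an 8 MiB malloc bdev at a 512-byte block size and then validates the `bdev_get_bdevs` reply with `jq`. A minimal sketch of that style of check follows; the JSON is a trimmed stand-in copied from the reply in the log, not a live RPC call, and the assertions are illustrative rather than the exact lines of `rpc.sh`.

```shell
# Trimmed stand-in for the bdev_get_bdevs reply shown in the log above.
bdevs='[{"name":"Malloc2","block_size":512,"num_blocks":16384,"claimed":false}]'

# rpc.sh checks list length with `jq length`; after one bdev_malloc_create
# exactly one bdev should be listed.
[ "$(jq length <<<"$bdevs")" -eq 1 ]

# 8 MiB at a 512-byte block size yields 16384 blocks, matching "num_blocks".
[ "$(jq -r '.[0].num_blocks' <<<"$bdevs")" -eq $((8 * 1024 * 1024 / 512)) ]

echo "integrity checks passed"
```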
00:04:08.038 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:08.038 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:08.038 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:08.038 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.038 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.038 [2024-09-28 08:41:45.847526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:08.038 [2024-09-28 08:41:45.847623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:08.038 [2024-09-28 08:41:45.847664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:08.038 [2024-09-28 08:41:45.847677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:08.038 [2024-09-28 08:41:45.849945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:08.038 [2024-09-28 08:41:45.849997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:08.038 Passthru0 00:04:08.038 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.038 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:08.038 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.038 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.038 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.038 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:08.038 { 00:04:08.038 "name": "Malloc2", 00:04:08.038 "aliases": [ 00:04:08.038 "fb172a91-a633-45f4-9c81-8719a2281e08" 00:04:08.038 ], 00:04:08.038 "product_name": "Malloc disk", 00:04:08.038 "block_size": 
512, 00:04:08.038 "num_blocks": 16384, 00:04:08.038 "uuid": "fb172a91-a633-45f4-9c81-8719a2281e08", 00:04:08.038 "assigned_rate_limits": { 00:04:08.038 "rw_ios_per_sec": 0, 00:04:08.038 "rw_mbytes_per_sec": 0, 00:04:08.038 "r_mbytes_per_sec": 0, 00:04:08.038 "w_mbytes_per_sec": 0 00:04:08.038 }, 00:04:08.038 "claimed": true, 00:04:08.038 "claim_type": "exclusive_write", 00:04:08.038 "zoned": false, 00:04:08.038 "supported_io_types": { 00:04:08.038 "read": true, 00:04:08.038 "write": true, 00:04:08.038 "unmap": true, 00:04:08.038 "flush": true, 00:04:08.038 "reset": true, 00:04:08.038 "nvme_admin": false, 00:04:08.038 "nvme_io": false, 00:04:08.038 "nvme_io_md": false, 00:04:08.038 "write_zeroes": true, 00:04:08.038 "zcopy": true, 00:04:08.038 "get_zone_info": false, 00:04:08.038 "zone_management": false, 00:04:08.038 "zone_append": false, 00:04:08.038 "compare": false, 00:04:08.038 "compare_and_write": false, 00:04:08.038 "abort": true, 00:04:08.038 "seek_hole": false, 00:04:08.038 "seek_data": false, 00:04:08.038 "copy": true, 00:04:08.038 "nvme_iov_md": false 00:04:08.038 }, 00:04:08.038 "memory_domains": [ 00:04:08.038 { 00:04:08.038 "dma_device_id": "system", 00:04:08.038 "dma_device_type": 1 00:04:08.038 }, 00:04:08.038 { 00:04:08.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.038 "dma_device_type": 2 00:04:08.038 } 00:04:08.038 ], 00:04:08.038 "driver_specific": {} 00:04:08.038 }, 00:04:08.038 { 00:04:08.038 "name": "Passthru0", 00:04:08.038 "aliases": [ 00:04:08.038 "de136601-63e7-5f1e-8d2b-42199b86aa94" 00:04:08.038 ], 00:04:08.038 "product_name": "passthru", 00:04:08.038 "block_size": 512, 00:04:08.038 "num_blocks": 16384, 00:04:08.038 "uuid": "de136601-63e7-5f1e-8d2b-42199b86aa94", 00:04:08.038 "assigned_rate_limits": { 00:04:08.038 "rw_ios_per_sec": 0, 00:04:08.038 "rw_mbytes_per_sec": 0, 00:04:08.038 "r_mbytes_per_sec": 0, 00:04:08.038 "w_mbytes_per_sec": 0 00:04:08.038 }, 00:04:08.038 "claimed": false, 00:04:08.038 "zoned": false, 00:04:08.038 
"supported_io_types": { 00:04:08.038 "read": true, 00:04:08.038 "write": true, 00:04:08.038 "unmap": true, 00:04:08.038 "flush": true, 00:04:08.038 "reset": true, 00:04:08.038 "nvme_admin": false, 00:04:08.038 "nvme_io": false, 00:04:08.039 "nvme_io_md": false, 00:04:08.039 "write_zeroes": true, 00:04:08.039 "zcopy": true, 00:04:08.039 "get_zone_info": false, 00:04:08.039 "zone_management": false, 00:04:08.039 "zone_append": false, 00:04:08.039 "compare": false, 00:04:08.039 "compare_and_write": false, 00:04:08.039 "abort": true, 00:04:08.039 "seek_hole": false, 00:04:08.039 "seek_data": false, 00:04:08.039 "copy": true, 00:04:08.039 "nvme_iov_md": false 00:04:08.039 }, 00:04:08.039 "memory_domains": [ 00:04:08.039 { 00:04:08.039 "dma_device_id": "system", 00:04:08.039 "dma_device_type": 1 00:04:08.039 }, 00:04:08.039 { 00:04:08.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.039 "dma_device_type": 2 00:04:08.039 } 00:04:08.039 ], 00:04:08.039 "driver_specific": { 00:04:08.039 "passthru": { 00:04:08.039 "name": "Passthru0", 00:04:08.039 "base_bdev_name": "Malloc2" 00:04:08.039 } 00:04:08.039 } 00:04:08.039 } 00:04:08.039 ]' 00:04:08.039 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:08.039 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:08.039 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:08.039 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.039 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.039 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.039 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:08.039 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.039 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:04:08.039 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.039 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:08.039 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.039 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.039 08:41:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.039 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:08.039 08:41:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:08.305 ************************************ 00:04:08.305 END TEST rpc_daemon_integrity 00:04:08.305 ************************************ 00:04:08.305 08:41:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:08.305 00:04:08.305 real 0m0.345s 00:04:08.305 user 0m0.189s 00:04:08.305 sys 0m0.048s 00:04:08.305 08:41:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.305 08:41:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.305 08:41:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:08.305 08:41:46 rpc -- rpc/rpc.sh@84 -- # killprocess 56799 00:04:08.305 08:41:46 rpc -- common/autotest_common.sh@950 -- # '[' -z 56799 ']' 00:04:08.305 08:41:46 rpc -- common/autotest_common.sh@954 -- # kill -0 56799 00:04:08.306 08:41:46 rpc -- common/autotest_common.sh@955 -- # uname 00:04:08.306 08:41:46 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:08.306 08:41:46 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56799 00:04:08.306 08:41:46 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:08.306 killing process with pid 56799 00:04:08.306 08:41:46 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:08.306 08:41:46 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 56799' 00:04:08.306 08:41:46 rpc -- common/autotest_common.sh@969 -- # kill 56799 00:04:08.306 08:41:46 rpc -- common/autotest_common.sh@974 -- # wait 56799 00:04:10.848 00:04:10.848 real 0m5.438s 00:04:10.848 user 0m5.938s 00:04:10.848 sys 0m0.938s 00:04:10.848 08:41:48 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.848 08:41:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.848 ************************************ 00:04:10.848 END TEST rpc 00:04:10.848 ************************************ 00:04:10.848 08:41:48 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:10.848 08:41:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.848 08:41:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.848 08:41:48 -- common/autotest_common.sh@10 -- # set +x 00:04:10.848 ************************************ 00:04:10.848 START TEST skip_rpc 00:04:10.848 ************************************ 00:04:10.848 08:41:48 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:10.848 * Looking for test storage... 
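The `skip_rpc` traces below run the dotted-version comparison from `scripts/common.sh` (`lt 1.15 2`) to pick lcov options. A simplified sketch of that comparison is shown here, under the assumption that only dot-separated numeric components matter; the real helper also handles `-` separators and the `gt`/`ge`/`le` operators.

```shell
# Simplified dotted-version "less than" check, component by component.
# Missing components are treated as 0 (so 1.15 vs 2 compares 1<2 first).
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```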
00:04:10.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:10.848 08:41:48 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:10.848 08:41:48 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:10.848 08:41:48 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:10.848 08:41:48 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.848 08:41:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:11.109 08:41:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.109 08:41:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.109 08:41:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.109 08:41:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:11.109 08:41:48 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.109 08:41:48 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:11.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.109 --rc genhtml_branch_coverage=1 00:04:11.109 --rc genhtml_function_coverage=1 00:04:11.109 --rc genhtml_legend=1 00:04:11.109 --rc geninfo_all_blocks=1 00:04:11.109 --rc geninfo_unexecuted_blocks=1 00:04:11.109 00:04:11.109 ' 00:04:11.109 08:41:48 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:11.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.109 --rc genhtml_branch_coverage=1 00:04:11.109 --rc genhtml_function_coverage=1 00:04:11.109 --rc genhtml_legend=1 00:04:11.109 --rc geninfo_all_blocks=1 00:04:11.109 --rc geninfo_unexecuted_blocks=1 00:04:11.109 00:04:11.109 ' 00:04:11.109 08:41:48 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:04:11.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.109 --rc genhtml_branch_coverage=1 00:04:11.109 --rc genhtml_function_coverage=1 00:04:11.109 --rc genhtml_legend=1 00:04:11.109 --rc geninfo_all_blocks=1 00:04:11.109 --rc geninfo_unexecuted_blocks=1 00:04:11.109 00:04:11.109 ' 00:04:11.109 08:41:48 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:11.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.109 --rc genhtml_branch_coverage=1 00:04:11.109 --rc genhtml_function_coverage=1 00:04:11.109 --rc genhtml_legend=1 00:04:11.109 --rc geninfo_all_blocks=1 00:04:11.109 --rc geninfo_unexecuted_blocks=1 00:04:11.109 00:04:11.109 ' 00:04:11.109 08:41:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:11.109 08:41:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:11.109 08:41:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:11.109 08:41:48 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:11.109 08:41:48 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:11.109 08:41:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.109 ************************************ 00:04:11.109 START TEST skip_rpc 00:04:11.109 ************************************ 00:04:11.109 08:41:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:11.109 08:41:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57028 00:04:11.109 08:41:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:11.109 08:41:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.109 08:41:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:11.109 [2024-09-28 08:41:48.972230] Starting SPDK v25.01-pre 
git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:11.109 [2024-09-28 08:41:48.972345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57028 ] 00:04:11.369 [2024-09-28 08:41:49.142455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.369 [2024-09-28 08:41:49.353872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57028 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57028 ']' 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57028 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57028 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:16.645 killing process with pid 57028 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57028' 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57028 00:04:16.645 08:41:53 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57028 00:04:18.551 00:04:18.551 real 0m7.461s 00:04:18.551 user 0m6.980s 00:04:18.551 sys 0m0.404s 00:04:18.551 08:41:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.551 08:41:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.551 ************************************ 00:04:18.551 END TEST skip_rpc 00:04:18.551 ************************************ 00:04:18.551 08:41:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:18.551 08:41:56 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.551 08:41:56 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.551 08:41:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.551 
************************************ 00:04:18.551 START TEST skip_rpc_with_json 00:04:18.551 ************************************ 00:04:18.551 08:41:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:18.551 08:41:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:18.551 08:41:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57139 00:04:18.551 08:41:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:18.551 08:41:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.551 08:41:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57139 00:04:18.551 08:41:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57139 ']' 00:04:18.551 08:41:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.551 08:41:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:18.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.551 08:41:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.551 08:41:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:18.551 08:41:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.551 [2024-09-28 08:41:56.495769] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:18.551 [2024-09-28 08:41:56.495926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57139 ] 00:04:18.811 [2024-09-28 08:41:56.663842] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.070 [2024-09-28 08:41:56.849228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.011 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:20.011 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:20.011 08:41:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:20.011 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.011 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.011 [2024-09-28 08:41:57.662419] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:20.011 request: 00:04:20.011 { 00:04:20.011 "trtype": "tcp", 00:04:20.011 "method": "nvmf_get_transports", 00:04:20.011 "req_id": 1 00:04:20.011 } 00:04:20.011 Got JSON-RPC error response 00:04:20.011 response: 00:04:20.011 { 00:04:20.011 "code": -19, 00:04:20.011 "message": "No such device" 00:04:20.011 } 00:04:20.011 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:20.011 08:41:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:20.011 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.011 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.011 [2024-09-28 08:41:57.674501] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
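The `nvmf_get_transports` call above is a deliberate negative test: before `nvmf_create_transport` runs, the RPC is expected to fail with `-19` (No such device). A hedged sketch of that expected-failure pattern follows; `rpc_cmd` here is a local stub that mimics the pre-creation error reply, whereas the real helper talks to `spdk_tgt` over `/var/tmp/spdk.sock`.

```shell
# Stub standing in for the real rpc_cmd; reproduces the error seen in the log.
rpc_cmd() {
    echo '{"code": -19, "message": "No such device"}' >&2
    return 1
}

# The test passes precisely because the call fails before the transport exists.
if ! rpc_cmd nvmf_get_transports --trtype tcp 2>/dev/null; then
    echo "transport absent, as expected before nvmf_create_transport"
fi
```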
00:04:20.011 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.011 08:41:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:20.011 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.011 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.011 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.011 08:41:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:20.011 { 00:04:20.011 "subsystems": [ 00:04:20.011 { 00:04:20.011 "subsystem": "fsdev", 00:04:20.011 "config": [ 00:04:20.011 { 00:04:20.011 "method": "fsdev_set_opts", 00:04:20.011 "params": { 00:04:20.011 "fsdev_io_pool_size": 65535, 00:04:20.011 "fsdev_io_cache_size": 256 00:04:20.011 } 00:04:20.011 } 00:04:20.011 ] 00:04:20.011 }, 00:04:20.011 { 00:04:20.011 "subsystem": "keyring", 00:04:20.011 "config": [] 00:04:20.011 }, 00:04:20.011 { 00:04:20.011 "subsystem": "iobuf", 00:04:20.011 "config": [ 00:04:20.011 { 00:04:20.011 "method": "iobuf_set_options", 00:04:20.011 "params": { 00:04:20.011 "small_pool_count": 8192, 00:04:20.011 "large_pool_count": 1024, 00:04:20.011 "small_bufsize": 8192, 00:04:20.011 "large_bufsize": 135168 00:04:20.011 } 00:04:20.011 } 00:04:20.011 ] 00:04:20.011 }, 00:04:20.011 { 00:04:20.011 "subsystem": "sock", 00:04:20.011 "config": [ 00:04:20.011 { 00:04:20.011 "method": "sock_set_default_impl", 00:04:20.011 "params": { 00:04:20.011 "impl_name": "posix" 00:04:20.011 } 00:04:20.011 }, 00:04:20.011 { 00:04:20.011 "method": "sock_impl_set_options", 00:04:20.011 "params": { 00:04:20.011 "impl_name": "ssl", 00:04:20.011 "recv_buf_size": 4096, 00:04:20.011 "send_buf_size": 4096, 00:04:20.011 "enable_recv_pipe": true, 00:04:20.011 "enable_quickack": false, 00:04:20.011 "enable_placement_id": 0, 00:04:20.011 
"enable_zerocopy_send_server": true, 00:04:20.011 "enable_zerocopy_send_client": false, 00:04:20.011 "zerocopy_threshold": 0, 00:04:20.011 "tls_version": 0, 00:04:20.011 "enable_ktls": false 00:04:20.011 } 00:04:20.011 }, 00:04:20.011 { 00:04:20.011 "method": "sock_impl_set_options", 00:04:20.011 "params": { 00:04:20.011 "impl_name": "posix", 00:04:20.012 "recv_buf_size": 2097152, 00:04:20.012 "send_buf_size": 2097152, 00:04:20.012 "enable_recv_pipe": true, 00:04:20.012 "enable_quickack": false, 00:04:20.012 "enable_placement_id": 0, 00:04:20.012 "enable_zerocopy_send_server": true, 00:04:20.012 "enable_zerocopy_send_client": false, 00:04:20.012 "zerocopy_threshold": 0, 00:04:20.012 "tls_version": 0, 00:04:20.012 "enable_ktls": false 00:04:20.012 } 00:04:20.012 } 00:04:20.012 ] 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "subsystem": "vmd", 00:04:20.012 "config": [] 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "subsystem": "accel", 00:04:20.012 "config": [ 00:04:20.012 { 00:04:20.012 "method": "accel_set_options", 00:04:20.012 "params": { 00:04:20.012 "small_cache_size": 128, 00:04:20.012 "large_cache_size": 16, 00:04:20.012 "task_count": 2048, 00:04:20.012 "sequence_count": 2048, 00:04:20.012 "buf_count": 2048 00:04:20.012 } 00:04:20.012 } 00:04:20.012 ] 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "subsystem": "bdev", 00:04:20.012 "config": [ 00:04:20.012 { 00:04:20.012 "method": "bdev_set_options", 00:04:20.012 "params": { 00:04:20.012 "bdev_io_pool_size": 65535, 00:04:20.012 "bdev_io_cache_size": 256, 00:04:20.012 "bdev_auto_examine": true, 00:04:20.012 "iobuf_small_cache_size": 128, 00:04:20.012 "iobuf_large_cache_size": 16 00:04:20.012 } 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "method": "bdev_raid_set_options", 00:04:20.012 "params": { 00:04:20.012 "process_window_size_kb": 1024, 00:04:20.012 "process_max_bandwidth_mb_sec": 0 00:04:20.012 } 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "method": "bdev_iscsi_set_options", 00:04:20.012 "params": { 00:04:20.012 
"timeout_sec": 30 00:04:20.012 } 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "method": "bdev_nvme_set_options", 00:04:20.012 "params": { 00:04:20.012 "action_on_timeout": "none", 00:04:20.012 "timeout_us": 0, 00:04:20.012 "timeout_admin_us": 0, 00:04:20.012 "keep_alive_timeout_ms": 10000, 00:04:20.012 "arbitration_burst": 0, 00:04:20.012 "low_priority_weight": 0, 00:04:20.012 "medium_priority_weight": 0, 00:04:20.012 "high_priority_weight": 0, 00:04:20.012 "nvme_adminq_poll_period_us": 10000, 00:04:20.012 "nvme_ioq_poll_period_us": 0, 00:04:20.012 "io_queue_requests": 0, 00:04:20.012 "delay_cmd_submit": true, 00:04:20.012 "transport_retry_count": 4, 00:04:20.012 "bdev_retry_count": 3, 00:04:20.012 "transport_ack_timeout": 0, 00:04:20.012 "ctrlr_loss_timeout_sec": 0, 00:04:20.012 "reconnect_delay_sec": 0, 00:04:20.012 "fast_io_fail_timeout_sec": 0, 00:04:20.012 "disable_auto_failback": false, 00:04:20.012 "generate_uuids": false, 00:04:20.012 "transport_tos": 0, 00:04:20.012 "nvme_error_stat": false, 00:04:20.012 "rdma_srq_size": 0, 00:04:20.012 "io_path_stat": false, 00:04:20.012 "allow_accel_sequence": false, 00:04:20.012 "rdma_max_cq_size": 0, 00:04:20.012 "rdma_cm_event_timeout_ms": 0, 00:04:20.012 "dhchap_digests": [ 00:04:20.012 "sha256", 00:04:20.012 "sha384", 00:04:20.012 "sha512" 00:04:20.012 ], 00:04:20.012 "dhchap_dhgroups": [ 00:04:20.012 "null", 00:04:20.012 "ffdhe2048", 00:04:20.012 "ffdhe3072", 00:04:20.012 "ffdhe4096", 00:04:20.012 "ffdhe6144", 00:04:20.012 "ffdhe8192" 00:04:20.012 ] 00:04:20.012 } 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "method": "bdev_nvme_set_hotplug", 00:04:20.012 "params": { 00:04:20.012 "period_us": 100000, 00:04:20.012 "enable": false 00:04:20.012 } 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "method": "bdev_wait_for_examine" 00:04:20.012 } 00:04:20.012 ] 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "subsystem": "scsi", 00:04:20.012 "config": null 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "subsystem": "scheduler", 
00:04:20.012 "config": [ 00:04:20.012 { 00:04:20.012 "method": "framework_set_scheduler", 00:04:20.012 "params": { 00:04:20.012 "name": "static" 00:04:20.012 } 00:04:20.012 } 00:04:20.012 ] 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "subsystem": "vhost_scsi", 00:04:20.012 "config": [] 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "subsystem": "vhost_blk", 00:04:20.012 "config": [] 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "subsystem": "ublk", 00:04:20.012 "config": [] 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "subsystem": "nbd", 00:04:20.012 "config": [] 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "subsystem": "nvmf", 00:04:20.012 "config": [ 00:04:20.012 { 00:04:20.012 "method": "nvmf_set_config", 00:04:20.012 "params": { 00:04:20.012 "discovery_filter": "match_any", 00:04:20.012 "admin_cmd_passthru": { 00:04:20.012 "identify_ctrlr": false 00:04:20.012 }, 00:04:20.012 "dhchap_digests": [ 00:04:20.012 "sha256", 00:04:20.012 "sha384", 00:04:20.012 "sha512" 00:04:20.012 ], 00:04:20.012 "dhchap_dhgroups": [ 00:04:20.012 "null", 00:04:20.012 "ffdhe2048", 00:04:20.012 "ffdhe3072", 00:04:20.012 "ffdhe4096", 00:04:20.012 "ffdhe6144", 00:04:20.012 "ffdhe8192" 00:04:20.012 ] 00:04:20.012 } 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "method": "nvmf_set_max_subsystems", 00:04:20.012 "params": { 00:04:20.012 "max_subsystems": 1024 00:04:20.012 } 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "method": "nvmf_set_crdt", 00:04:20.012 "params": { 00:04:20.012 "crdt1": 0, 00:04:20.012 "crdt2": 0, 00:04:20.012 "crdt3": 0 00:04:20.012 } 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "method": "nvmf_create_transport", 00:04:20.012 "params": { 00:04:20.012 "trtype": "TCP", 00:04:20.012 "max_queue_depth": 128, 00:04:20.012 "max_io_qpairs_per_ctrlr": 127, 00:04:20.012 "in_capsule_data_size": 4096, 00:04:20.012 "max_io_size": 131072, 00:04:20.012 "io_unit_size": 131072, 00:04:20.012 "max_aq_depth": 128, 00:04:20.012 "num_shared_buffers": 511, 00:04:20.012 "buf_cache_size": 4294967295, 
00:04:20.012 "dif_insert_or_strip": false, 00:04:20.012 "zcopy": false, 00:04:20.012 "c2h_success": true, 00:04:20.012 "sock_priority": 0, 00:04:20.012 "abort_timeout_sec": 1, 00:04:20.012 "ack_timeout": 0, 00:04:20.012 "data_wr_pool_size": 0 00:04:20.012 } 00:04:20.012 } 00:04:20.012 ] 00:04:20.012 }, 00:04:20.012 { 00:04:20.012 "subsystem": "iscsi", 00:04:20.012 "config": [ 00:04:20.012 { 00:04:20.012 "method": "iscsi_set_options", 00:04:20.012 "params": { 00:04:20.012 "node_base": "iqn.2016-06.io.spdk", 00:04:20.012 "max_sessions": 128, 00:04:20.012 "max_connections_per_session": 2, 00:04:20.012 "max_queue_depth": 64, 00:04:20.012 "default_time2wait": 2, 00:04:20.012 "default_time2retain": 20, 00:04:20.012 "first_burst_length": 8192, 00:04:20.012 "immediate_data": true, 00:04:20.012 "allow_duplicated_isid": false, 00:04:20.012 "error_recovery_level": 0, 00:04:20.012 "nop_timeout": 60, 00:04:20.012 "nop_in_interval": 30, 00:04:20.012 "disable_chap": false, 00:04:20.012 "require_chap": false, 00:04:20.012 "mutual_chap": false, 00:04:20.012 "chap_group": 0, 00:04:20.012 "max_large_datain_per_connection": 64, 00:04:20.012 "max_r2t_per_connection": 4, 00:04:20.012 "pdu_pool_size": 36864, 00:04:20.012 "immediate_data_pool_size": 16384, 00:04:20.012 "data_out_pool_size": 2048 00:04:20.012 } 00:04:20.012 } 00:04:20.012 ] 00:04:20.012 } 00:04:20.012 ] 00:04:20.012 } 00:04:20.012 08:41:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:20.012 08:41:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57139 00:04:20.012 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57139 ']' 00:04:20.012 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57139 00:04:20.012 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:20.012 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:04:20.012 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57139 00:04:20.013 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:20.013 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:20.013 killing process with pid 57139 00:04:20.013 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57139' 00:04:20.013 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57139 00:04:20.013 08:41:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57139 00:04:22.552 08:42:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57188 00:04:22.552 08:42:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:22.552 08:42:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:27.831 08:42:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57188 00:04:27.831 08:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57188 ']' 00:04:27.831 08:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57188 00:04:27.831 08:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:27.831 08:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:27.831 08:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57188 00:04:27.831 08:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:27.831 08:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:27.831 killing process with pid 57188 
00:04:27.831 08:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57188' 00:04:27.831 08:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57188 00:04:27.831 08:42:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57188 00:04:29.736 08:42:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:29.736 08:42:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:29.736 00:04:29.736 real 0m11.316s 00:04:29.736 user 0m10.684s 00:04:29.736 sys 0m0.904s 00:04:29.736 08:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.736 08:42:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.736 ************************************ 00:04:29.736 END TEST skip_rpc_with_json 00:04:29.736 ************************************ 00:04:29.996 08:42:07 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:29.996 08:42:07 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.996 08:42:07 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.996 08:42:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.996 ************************************ 00:04:29.996 START TEST skip_rpc_with_delay 00:04:29.996 ************************************ 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.996 [2024-09-28 08:42:07.883331] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:29.996 [2024-09-28 08:42:07.883445] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:29.996 00:04:29.996 real 0m0.181s 00:04:29.996 user 0m0.090s 00:04:29.996 sys 0m0.090s 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.996 08:42:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:29.996 ************************************ 00:04:29.996 END TEST skip_rpc_with_delay 00:04:29.996 ************************************ 00:04:30.257 08:42:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:30.257 08:42:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:30.257 08:42:08 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:30.257 08:42:08 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.257 08:42:08 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.257 08:42:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.257 ************************************ 00:04:30.257 START TEST exit_on_failed_rpc_init 00:04:30.257 ************************************ 00:04:30.257 08:42:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:30.257 08:42:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57327 00:04:30.257 08:42:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:04:30.257 08:42:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57327 00:04:30.257 08:42:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57327 ']' 00:04:30.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.257 08:42:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.257 08:42:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.257 08:42:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.257 08:42:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.257 08:42:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:30.257 [2024-09-28 08:42:08.131664] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:30.257 [2024-09-28 08:42:08.131807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57327 ] 00:04:30.517 [2024-09-28 08:42:08.300981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.517 [2024-09-28 08:42:08.483886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.457 08:42:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.457 08:42:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:31.457 08:42:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.457 08:42:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:31.457 08:42:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:31.458 08:42:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:31.458 08:42:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.458 08:42:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.458 08:42:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.458 08:42:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.458 08:42:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.458 08:42:09 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.458 08:42:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.458 08:42:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:31.458 08:42:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:31.458 [2024-09-28 08:42:09.413103] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:31.458 [2024-09-28 08:42:09.413338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57345 ] 00:04:31.718 [2024-09-28 08:42:09.579962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.980 [2024-09-28 08:42:09.829114] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.980 [2024-09-28 08:42:09.829303] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:31.980 [2024-09-28 08:42:09.829408] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:31.980 [2024-09-28 08:42:09.829448] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:32.548 08:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:32.548 08:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:32.548 08:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:32.549 08:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:32.549 08:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:32.549 08:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:32.549 08:42:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:32.549 08:42:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57327 00:04:32.549 08:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57327 ']' 00:04:32.549 08:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57327 00:04:32.549 08:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:32.549 08:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:32.549 08:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57327 00:04:32.549 killing process with pid 57327 00:04:32.549 08:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:32.549 08:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:32.549 08:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 57327' 00:04:32.549 08:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57327 00:04:32.549 08:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57327 00:04:35.198 ************************************ 00:04:35.198 END TEST exit_on_failed_rpc_init 00:04:35.198 ************************************ 00:04:35.198 00:04:35.198 real 0m4.684s 00:04:35.198 user 0m5.263s 00:04:35.198 sys 0m0.624s 00:04:35.198 08:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.198 08:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:35.198 08:42:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:35.198 ************************************ 00:04:35.198 END TEST skip_rpc 00:04:35.198 ************************************ 00:04:35.198 00:04:35.198 real 0m24.140s 00:04:35.198 user 0m23.215s 00:04:35.198 sys 0m2.348s 00:04:35.198 08:42:12 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.198 08:42:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.198 08:42:12 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:35.198 08:42:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.198 08:42:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.198 08:42:12 -- common/autotest_common.sh@10 -- # set +x 00:04:35.198 ************************************ 00:04:35.198 START TEST rpc_client 00:04:35.198 ************************************ 00:04:35.198 08:42:12 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:35.198 * Looking for test storage... 
00:04:35.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:35.198 08:42:12 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:35.198 08:42:12 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:35.198 08:42:12 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:35.198 08:42:13 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.198 08:42:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:35.198 08:42:13 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.198 08:42:13 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:35.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.198 --rc genhtml_branch_coverage=1 00:04:35.198 --rc genhtml_function_coverage=1 00:04:35.198 --rc genhtml_legend=1 00:04:35.198 --rc geninfo_all_blocks=1 00:04:35.198 --rc geninfo_unexecuted_blocks=1 00:04:35.198 00:04:35.198 ' 00:04:35.198 08:42:13 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:35.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.198 --rc genhtml_branch_coverage=1 00:04:35.198 --rc genhtml_function_coverage=1 00:04:35.198 --rc genhtml_legend=1 00:04:35.198 --rc geninfo_all_blocks=1 00:04:35.198 --rc geninfo_unexecuted_blocks=1 00:04:35.198 00:04:35.198 ' 00:04:35.198 08:42:13 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:35.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.198 --rc genhtml_branch_coverage=1 00:04:35.198 --rc genhtml_function_coverage=1 00:04:35.198 --rc genhtml_legend=1 00:04:35.198 --rc geninfo_all_blocks=1 00:04:35.198 --rc geninfo_unexecuted_blocks=1 00:04:35.198 00:04:35.198 ' 00:04:35.198 08:42:13 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:35.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.198 --rc genhtml_branch_coverage=1 00:04:35.198 --rc genhtml_function_coverage=1 00:04:35.198 --rc genhtml_legend=1 00:04:35.198 --rc geninfo_all_blocks=1 00:04:35.198 --rc geninfo_unexecuted_blocks=1 00:04:35.198 00:04:35.198 ' 00:04:35.198 08:42:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:35.198 OK 00:04:35.198 08:42:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:35.198 00:04:35.198 real 0m0.311s 00:04:35.198 user 0m0.140s 00:04:35.198 sys 0m0.186s 00:04:35.198 08:42:13 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.198 08:42:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:35.198 ************************************ 00:04:35.198 END TEST rpc_client 00:04:35.198 ************************************ 00:04:35.198 08:42:13 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:35.198 08:42:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.198 08:42:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.198 08:42:13 -- common/autotest_common.sh@10 -- # set +x 00:04:35.459 ************************************ 00:04:35.459 START TEST json_config 00:04:35.459 ************************************ 00:04:35.459 08:42:13 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:35.459 08:42:13 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:35.459 08:42:13 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:35.459 08:42:13 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:35.459 08:42:13 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:35.459 08:42:13 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.459 08:42:13 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.459 08:42:13 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.459 08:42:13 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.459 08:42:13 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.459 08:42:13 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.459 08:42:13 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.459 08:42:13 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.459 08:42:13 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.459 08:42:13 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.459 08:42:13 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.459 08:42:13 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:35.459 08:42:13 json_config -- scripts/common.sh@345 -- # : 1 00:04:35.459 08:42:13 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.459 08:42:13 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.459 08:42:13 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:35.459 08:42:13 json_config -- scripts/common.sh@353 -- # local d=1 00:04:35.459 08:42:13 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.459 08:42:13 json_config -- scripts/common.sh@355 -- # echo 1 00:04:35.459 08:42:13 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.459 08:42:13 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:35.459 08:42:13 json_config -- scripts/common.sh@353 -- # local d=2 00:04:35.459 08:42:13 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.459 08:42:13 json_config -- scripts/common.sh@355 -- # echo 2 00:04:35.459 08:42:13 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.459 08:42:13 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.459 08:42:13 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.459 08:42:13 json_config -- scripts/common.sh@368 -- # return 0 00:04:35.459 08:42:13 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.459 08:42:13 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:35.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.459 --rc genhtml_branch_coverage=1 00:04:35.459 --rc genhtml_function_coverage=1 00:04:35.459 --rc genhtml_legend=1 00:04:35.459 --rc geninfo_all_blocks=1 00:04:35.459 --rc geninfo_unexecuted_blocks=1 00:04:35.459 00:04:35.459 ' 00:04:35.459 08:42:13 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:35.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.459 --rc genhtml_branch_coverage=1 00:04:35.459 --rc genhtml_function_coverage=1 00:04:35.459 --rc genhtml_legend=1 00:04:35.459 --rc geninfo_all_blocks=1 00:04:35.459 --rc geninfo_unexecuted_blocks=1 00:04:35.459 00:04:35.459 ' 00:04:35.459 08:42:13 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:35.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.459 --rc genhtml_branch_coverage=1 00:04:35.459 --rc genhtml_function_coverage=1 00:04:35.459 --rc genhtml_legend=1 00:04:35.459 --rc geninfo_all_blocks=1 00:04:35.459 --rc geninfo_unexecuted_blocks=1 00:04:35.459 00:04:35.459 ' 00:04:35.459 08:42:13 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:35.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.459 --rc genhtml_branch_coverage=1 00:04:35.459 --rc genhtml_function_coverage=1 00:04:35.459 --rc genhtml_legend=1 00:04:35.459 --rc geninfo_all_blocks=1 00:04:35.459 --rc geninfo_unexecuted_blocks=1 00:04:35.459 00:04:35.459 ' 00:04:35.459 08:42:13 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7b450651-4bf1-412f-b307-e5438f919ee2 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=7b450651-4bf1-412f-b307-e5438f919ee2 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:35.459 08:42:13 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:35.459 08:42:13 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:35.459 08:42:13 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.459 08:42:13 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.459 08:42:13 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.459 08:42:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.459 08:42:13 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.459 08:42:13 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.459 08:42:13 json_config -- paths/export.sh@5 -- # export PATH 00:04:35.460 08:42:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.460 08:42:13 json_config -- nvmf/common.sh@51 -- # : 0 00:04:35.460 08:42:13 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:35.460 08:42:13 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:35.460 08:42:13 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:35.460 08:42:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.460 08:42:13 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:35.460 08:42:13 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:35.460 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:35.460 08:42:13 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:35.460 08:42:13 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:35.460 08:42:13 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:35.460 08:42:13 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:35.460 08:42:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:35.460 08:42:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:35.460 08:42:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:35.460 08:42:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:35.460 08:42:13 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:35.460 WARNING: No tests are enabled so not running JSON configuration tests 00:04:35.460 08:42:13 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:35.460 00:04:35.460 real 0m0.235s 00:04:35.460 user 0m0.137s 00:04:35.460 sys 0m0.103s 00:04:35.460 08:42:13 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.460 08:42:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.460 ************************************ 00:04:35.460 END TEST json_config 00:04:35.460 ************************************ 00:04:35.720 08:42:13 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:35.720 08:42:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.720 08:42:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.720 08:42:13 -- common/autotest_common.sh@10 -- # set +x 00:04:35.720 ************************************ 00:04:35.720 START TEST json_config_extra_key 00:04:35.720 ************************************ 00:04:35.720 08:42:13 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:35.720 08:42:13 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:35.720 08:42:13 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:04:35.720 08:42:13 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:35.720 08:42:13 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.720 08:42:13 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:35.720 08:42:13 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.720 08:42:13 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:35.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.720 --rc genhtml_branch_coverage=1 00:04:35.720 --rc genhtml_function_coverage=1 00:04:35.720 --rc genhtml_legend=1 00:04:35.720 --rc geninfo_all_blocks=1 00:04:35.720 --rc geninfo_unexecuted_blocks=1 00:04:35.720 00:04:35.720 ' 00:04:35.720 08:42:13 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:35.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.720 --rc genhtml_branch_coverage=1 00:04:35.720 --rc genhtml_function_coverage=1 00:04:35.720 --rc 
genhtml_legend=1 00:04:35.720 --rc geninfo_all_blocks=1 00:04:35.720 --rc geninfo_unexecuted_blocks=1 00:04:35.720 00:04:35.720 ' 00:04:35.720 08:42:13 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:35.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.720 --rc genhtml_branch_coverage=1 00:04:35.720 --rc genhtml_function_coverage=1 00:04:35.720 --rc genhtml_legend=1 00:04:35.720 --rc geninfo_all_blocks=1 00:04:35.721 --rc geninfo_unexecuted_blocks=1 00:04:35.721 00:04:35.721 ' 00:04:35.721 08:42:13 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:35.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.721 --rc genhtml_branch_coverage=1 00:04:35.721 --rc genhtml_function_coverage=1 00:04:35.721 --rc genhtml_legend=1 00:04:35.721 --rc geninfo_all_blocks=1 00:04:35.721 --rc geninfo_unexecuted_blocks=1 00:04:35.721 00:04:35.721 ' 00:04:35.721 08:42:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7b450651-4bf1-412f-b307-e5438f919ee2 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7b450651-4bf1-412f-b307-e5438f919ee2 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:35.721 08:42:13 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:35.721 08:42:13 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:35.981 08:42:13 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.981 08:42:13 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.981 08:42:13 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.981 08:42:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.981 08:42:13 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.981 08:42:13 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.981 08:42:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:35.981 08:42:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.981 08:42:13 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:35.981 08:42:13 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:35.981 08:42:13 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:35.981 08:42:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:35.981 08:42:13 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.981 08:42:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:35.981 08:42:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:35.981 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:35.981 08:42:13 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:35.981 08:42:13 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:35.981 08:42:13 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:35.981 08:42:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:35.981 08:42:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:35.981 08:42:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:35.981 08:42:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:35.982 08:42:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:35.982 08:42:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:35.982 08:42:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:35.982 08:42:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:35.982 08:42:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:35.982 08:42:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:35.982 08:42:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:35.982 INFO: launching applications... 
00:04:35.982 08:42:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:35.982 08:42:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:35.982 08:42:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:35.982 08:42:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:35.982 08:42:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:35.982 08:42:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:35.982 08:42:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:35.982 08:42:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:35.982 08:42:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57555 00:04:35.982 08:42:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:35.982 Waiting for target to run... 00:04:35.982 08:42:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57555 /var/tmp/spdk_tgt.sock 00:04:35.982 08:42:13 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57555 ']' 00:04:35.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:35.982 08:42:13 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:35.982 08:42:13 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:35.982 08:42:13 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.982 08:42:13 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:35.982 08:42:13 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.982 08:42:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:35.982 [2024-09-28 08:42:13.843294] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:35.982 [2024-09-28 08:42:13.843419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57555 ] 00:04:36.552 [2024-09-28 08:42:14.392748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.810 [2024-09-28 08:42:14.577076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.378 00:04:37.378 INFO: shutting down applications... 00:04:37.378 08:42:15 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:37.378 08:42:15 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:37.378 08:42:15 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:37.378 08:42:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:37.378 08:42:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:37.378 08:42:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:37.378 08:42:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:37.378 08:42:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57555 ]] 00:04:37.378 08:42:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57555 00:04:37.378 08:42:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:37.378 08:42:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.378 08:42:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57555 00:04:37.378 08:42:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.945 08:42:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.945 08:42:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.945 08:42:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57555 00:04:37.945 08:42:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:38.514 08:42:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:38.514 08:42:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.514 08:42:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57555 00:04:38.514 08:42:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:38.773 08:42:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:38.773 08:42:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.773 08:42:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57555 00:04:38.773 08:42:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.342 08:42:17 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:39.342 08:42:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.342 08:42:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57555 00:04:39.342 08:42:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.910 08:42:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.910 08:42:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.910 08:42:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57555 00:04:39.910 08:42:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:40.479 08:42:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:40.479 08:42:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.479 08:42:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57555 00:04:40.479 08:42:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:40.479 08:42:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:40.479 SPDK target shutdown done 00:04:40.479 08:42:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:40.479 08:42:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:40.479 08:42:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:40.479 Success 00:04:40.479 00:04:40.479 real 0m4.780s 00:04:40.479 user 0m3.978s 00:04:40.479 sys 0m0.733s 00:04:40.479 08:42:18 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.479 ************************************ 00:04:40.479 END TEST json_config_extra_key 00:04:40.479 ************************************ 00:04:40.479 08:42:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:40.479 08:42:18 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.479 08:42:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.479 08:42:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.479 08:42:18 -- common/autotest_common.sh@10 -- # set +x 00:04:40.479 ************************************ 00:04:40.479 START TEST alias_rpc 00:04:40.479 ************************************ 00:04:40.479 08:42:18 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.479 * Looking for test storage... 00:04:40.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:40.739 08:42:18 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:40.739 08:42:18 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:40.739 08:42:18 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:40.739 08:42:18 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.739 08:42:18 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.739 08:42:18 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.739 08:42:18 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.739 08:42:18 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:40.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.739 --rc genhtml_branch_coverage=1 00:04:40.739 --rc genhtml_function_coverage=1 00:04:40.739 --rc genhtml_legend=1 00:04:40.739 --rc geninfo_all_blocks=1 00:04:40.739 --rc geninfo_unexecuted_blocks=1 00:04:40.739 00:04:40.739 ' 00:04:40.739 08:42:18 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:40.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.739 --rc genhtml_branch_coverage=1 00:04:40.739 --rc genhtml_function_coverage=1 00:04:40.739 --rc 
genhtml_legend=1 00:04:40.739 --rc geninfo_all_blocks=1 00:04:40.739 --rc geninfo_unexecuted_blocks=1 00:04:40.739 00:04:40.739 ' 00:04:40.739 08:42:18 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:40.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.739 --rc genhtml_branch_coverage=1 00:04:40.739 --rc genhtml_function_coverage=1 00:04:40.739 --rc genhtml_legend=1 00:04:40.739 --rc geninfo_all_blocks=1 00:04:40.739 --rc geninfo_unexecuted_blocks=1 00:04:40.739 00:04:40.739 ' 00:04:40.739 08:42:18 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:40.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.739 --rc genhtml_branch_coverage=1 00:04:40.739 --rc genhtml_function_coverage=1 00:04:40.739 --rc genhtml_legend=1 00:04:40.739 --rc geninfo_all_blocks=1 00:04:40.739 --rc geninfo_unexecuted_blocks=1 00:04:40.739 00:04:40.739 ' 00:04:40.739 08:42:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:40.739 08:42:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57672 00:04:40.739 08:42:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.739 08:42:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57672 00:04:40.739 08:42:18 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57672 ']' 00:04:40.739 08:42:18 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.739 08:42:18 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:40.739 08:42:18 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:40.740 08:42:18 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:40.740 08:42:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.740 [2024-09-28 08:42:18.667377] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:40.740 [2024-09-28 08:42:18.667520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57672 ] 00:04:40.999 [2024-09-28 08:42:18.837063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.258 [2024-09-28 08:42:19.032927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.197 08:42:19 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:42.197 08:42:19 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:42.197 08:42:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:42.197 08:42:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57672 00:04:42.197 08:42:20 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57672 ']' 00:04:42.197 08:42:20 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57672 00:04:42.197 08:42:20 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:42.197 08:42:20 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.197 08:42:20 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57672 00:04:42.197 killing process with pid 57672 00:04:42.198 08:42:20 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.198 08:42:20 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.198 08:42:20 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57672' 00:04:42.198 08:42:20 alias_rpc -- 
common/autotest_common.sh@969 -- # kill 57672 00:04:42.198 08:42:20 alias_rpc -- common/autotest_common.sh@974 -- # wait 57672 00:04:44.739 ************************************ 00:04:44.739 END TEST alias_rpc 00:04:44.739 ************************************ 00:04:44.739 00:04:44.739 real 0m4.232s 00:04:44.739 user 0m4.186s 00:04:44.739 sys 0m0.587s 00:04:44.739 08:42:22 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.739 08:42:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.739 08:42:22 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:44.739 08:42:22 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:44.739 08:42:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.739 08:42:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.739 08:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:44.739 ************************************ 00:04:44.739 START TEST spdkcli_tcp 00:04:44.739 ************************************ 00:04:44.739 08:42:22 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:45.000 * Looking for test storage... 
00:04:45.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:45.000 08:42:22 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:45.000 08:42:22 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:45.000 08:42:22 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:45.000 08:42:22 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.000 08:42:22 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:45.000 08:42:22 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.000 08:42:22 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:45.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.000 --rc genhtml_branch_coverage=1 00:04:45.000 --rc genhtml_function_coverage=1 00:04:45.000 --rc genhtml_legend=1 00:04:45.000 --rc geninfo_all_blocks=1 00:04:45.000 --rc geninfo_unexecuted_blocks=1 00:04:45.000 00:04:45.000 ' 00:04:45.000 08:42:22 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:45.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.000 --rc genhtml_branch_coverage=1 00:04:45.000 --rc genhtml_function_coverage=1 00:04:45.000 --rc genhtml_legend=1 00:04:45.000 --rc geninfo_all_blocks=1 00:04:45.000 --rc geninfo_unexecuted_blocks=1 00:04:45.000 00:04:45.000 ' 00:04:45.000 08:42:22 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:45.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.000 --rc genhtml_branch_coverage=1 00:04:45.000 --rc genhtml_function_coverage=1 00:04:45.000 --rc genhtml_legend=1 00:04:45.000 --rc geninfo_all_blocks=1 00:04:45.000 --rc geninfo_unexecuted_blocks=1 00:04:45.000 00:04:45.000 ' 00:04:45.000 08:42:22 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:45.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.000 --rc genhtml_branch_coverage=1 00:04:45.000 --rc genhtml_function_coverage=1 00:04:45.000 --rc genhtml_legend=1 00:04:45.000 --rc geninfo_all_blocks=1 00:04:45.000 --rc geninfo_unexecuted_blocks=1 00:04:45.000 00:04:45.000 ' 00:04:45.000 08:42:22 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:45.000 08:42:22 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:45.000 08:42:22 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:45.000 08:42:22 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:45.000 08:42:22 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:45.000 08:42:22 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:45.000 08:42:22 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:45.000 08:42:22 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:45.000 08:42:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.000 08:42:22 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57779 00:04:45.000 08:42:22 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:45.000 08:42:22 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57779 00:04:45.000 08:42:22 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 57779 ']' 00:04:45.000 08:42:22 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.000 08:42:22 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:45.000 08:42:22 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.000 08:42:22 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:45.000 08:42:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.261 [2024-09-28 08:42:23.000339] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:45.261 [2024-09-28 08:42:23.000599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57779 ] 00:04:45.261 [2024-09-28 08:42:23.170416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.522 [2024-09-28 08:42:23.422207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.522 [2024-09-28 08:42:23.422252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.462 08:42:24 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.462 08:42:24 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:46.462 08:42:24 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57802 00:04:46.462 08:42:24 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:46.462 08:42:24 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:46.722 [ 00:04:46.722 "bdev_malloc_delete", 
00:04:46.722 "bdev_malloc_create", 00:04:46.722 "bdev_null_resize", 00:04:46.722 "bdev_null_delete", 00:04:46.722 "bdev_null_create", 00:04:46.722 "bdev_nvme_cuse_unregister", 00:04:46.722 "bdev_nvme_cuse_register", 00:04:46.722 "bdev_opal_new_user", 00:04:46.722 "bdev_opal_set_lock_state", 00:04:46.722 "bdev_opal_delete", 00:04:46.722 "bdev_opal_get_info", 00:04:46.722 "bdev_opal_create", 00:04:46.722 "bdev_nvme_opal_revert", 00:04:46.722 "bdev_nvme_opal_init", 00:04:46.722 "bdev_nvme_send_cmd", 00:04:46.722 "bdev_nvme_set_keys", 00:04:46.722 "bdev_nvme_get_path_iostat", 00:04:46.722 "bdev_nvme_get_mdns_discovery_info", 00:04:46.722 "bdev_nvme_stop_mdns_discovery", 00:04:46.722 "bdev_nvme_start_mdns_discovery", 00:04:46.722 "bdev_nvme_set_multipath_policy", 00:04:46.722 "bdev_nvme_set_preferred_path", 00:04:46.722 "bdev_nvme_get_io_paths", 00:04:46.722 "bdev_nvme_remove_error_injection", 00:04:46.722 "bdev_nvme_add_error_injection", 00:04:46.722 "bdev_nvme_get_discovery_info", 00:04:46.722 "bdev_nvme_stop_discovery", 00:04:46.722 "bdev_nvme_start_discovery", 00:04:46.722 "bdev_nvme_get_controller_health_info", 00:04:46.722 "bdev_nvme_disable_controller", 00:04:46.722 "bdev_nvme_enable_controller", 00:04:46.722 "bdev_nvme_reset_controller", 00:04:46.722 "bdev_nvme_get_transport_statistics", 00:04:46.722 "bdev_nvme_apply_firmware", 00:04:46.722 "bdev_nvme_detach_controller", 00:04:46.722 "bdev_nvme_get_controllers", 00:04:46.722 "bdev_nvme_attach_controller", 00:04:46.722 "bdev_nvme_set_hotplug", 00:04:46.722 "bdev_nvme_set_options", 00:04:46.722 "bdev_passthru_delete", 00:04:46.722 "bdev_passthru_create", 00:04:46.722 "bdev_lvol_set_parent_bdev", 00:04:46.722 "bdev_lvol_set_parent", 00:04:46.722 "bdev_lvol_check_shallow_copy", 00:04:46.722 "bdev_lvol_start_shallow_copy", 00:04:46.722 "bdev_lvol_grow_lvstore", 00:04:46.722 "bdev_lvol_get_lvols", 00:04:46.722 "bdev_lvol_get_lvstores", 00:04:46.722 "bdev_lvol_delete", 00:04:46.722 "bdev_lvol_set_read_only", 
00:04:46.722 "bdev_lvol_resize", 00:04:46.722 "bdev_lvol_decouple_parent", 00:04:46.722 "bdev_lvol_inflate", 00:04:46.722 "bdev_lvol_rename", 00:04:46.722 "bdev_lvol_clone_bdev", 00:04:46.722 "bdev_lvol_clone", 00:04:46.722 "bdev_lvol_snapshot", 00:04:46.722 "bdev_lvol_create", 00:04:46.722 "bdev_lvol_delete_lvstore", 00:04:46.722 "bdev_lvol_rename_lvstore", 00:04:46.722 "bdev_lvol_create_lvstore", 00:04:46.722 "bdev_raid_set_options", 00:04:46.722 "bdev_raid_remove_base_bdev", 00:04:46.722 "bdev_raid_add_base_bdev", 00:04:46.722 "bdev_raid_delete", 00:04:46.722 "bdev_raid_create", 00:04:46.722 "bdev_raid_get_bdevs", 00:04:46.722 "bdev_error_inject_error", 00:04:46.722 "bdev_error_delete", 00:04:46.722 "bdev_error_create", 00:04:46.722 "bdev_split_delete", 00:04:46.722 "bdev_split_create", 00:04:46.722 "bdev_delay_delete", 00:04:46.722 "bdev_delay_create", 00:04:46.722 "bdev_delay_update_latency", 00:04:46.722 "bdev_zone_block_delete", 00:04:46.722 "bdev_zone_block_create", 00:04:46.722 "blobfs_create", 00:04:46.722 "blobfs_detect", 00:04:46.722 "blobfs_set_cache_size", 00:04:46.722 "bdev_aio_delete", 00:04:46.722 "bdev_aio_rescan", 00:04:46.722 "bdev_aio_create", 00:04:46.722 "bdev_ftl_set_property", 00:04:46.722 "bdev_ftl_get_properties", 00:04:46.722 "bdev_ftl_get_stats", 00:04:46.722 "bdev_ftl_unmap", 00:04:46.722 "bdev_ftl_unload", 00:04:46.722 "bdev_ftl_delete", 00:04:46.722 "bdev_ftl_load", 00:04:46.722 "bdev_ftl_create", 00:04:46.722 "bdev_virtio_attach_controller", 00:04:46.722 "bdev_virtio_scsi_get_devices", 00:04:46.722 "bdev_virtio_detach_controller", 00:04:46.722 "bdev_virtio_blk_set_hotplug", 00:04:46.722 "bdev_iscsi_delete", 00:04:46.723 "bdev_iscsi_create", 00:04:46.723 "bdev_iscsi_set_options", 00:04:46.723 "accel_error_inject_error", 00:04:46.723 "ioat_scan_accel_module", 00:04:46.723 "dsa_scan_accel_module", 00:04:46.723 "iaa_scan_accel_module", 00:04:46.723 "keyring_file_remove_key", 00:04:46.723 "keyring_file_add_key", 00:04:46.723 
"keyring_linux_set_options", 00:04:46.723 "fsdev_aio_delete", 00:04:46.723 "fsdev_aio_create", 00:04:46.723 "iscsi_get_histogram", 00:04:46.723 "iscsi_enable_histogram", 00:04:46.723 "iscsi_set_options", 00:04:46.723 "iscsi_get_auth_groups", 00:04:46.723 "iscsi_auth_group_remove_secret", 00:04:46.723 "iscsi_auth_group_add_secret", 00:04:46.723 "iscsi_delete_auth_group", 00:04:46.723 "iscsi_create_auth_group", 00:04:46.723 "iscsi_set_discovery_auth", 00:04:46.723 "iscsi_get_options", 00:04:46.723 "iscsi_target_node_request_logout", 00:04:46.723 "iscsi_target_node_set_redirect", 00:04:46.723 "iscsi_target_node_set_auth", 00:04:46.723 "iscsi_target_node_add_lun", 00:04:46.723 "iscsi_get_stats", 00:04:46.723 "iscsi_get_connections", 00:04:46.723 "iscsi_portal_group_set_auth", 00:04:46.723 "iscsi_start_portal_group", 00:04:46.723 "iscsi_delete_portal_group", 00:04:46.723 "iscsi_create_portal_group", 00:04:46.723 "iscsi_get_portal_groups", 00:04:46.723 "iscsi_delete_target_node", 00:04:46.723 "iscsi_target_node_remove_pg_ig_maps", 00:04:46.723 "iscsi_target_node_add_pg_ig_maps", 00:04:46.723 "iscsi_create_target_node", 00:04:46.723 "iscsi_get_target_nodes", 00:04:46.723 "iscsi_delete_initiator_group", 00:04:46.723 "iscsi_initiator_group_remove_initiators", 00:04:46.723 "iscsi_initiator_group_add_initiators", 00:04:46.723 "iscsi_create_initiator_group", 00:04:46.723 "iscsi_get_initiator_groups", 00:04:46.723 "nvmf_set_crdt", 00:04:46.723 "nvmf_set_config", 00:04:46.723 "nvmf_set_max_subsystems", 00:04:46.723 "nvmf_stop_mdns_prr", 00:04:46.723 "nvmf_publish_mdns_prr", 00:04:46.723 "nvmf_subsystem_get_listeners", 00:04:46.723 "nvmf_subsystem_get_qpairs", 00:04:46.723 "nvmf_subsystem_get_controllers", 00:04:46.723 "nvmf_get_stats", 00:04:46.723 "nvmf_get_transports", 00:04:46.723 "nvmf_create_transport", 00:04:46.723 "nvmf_get_targets", 00:04:46.723 "nvmf_delete_target", 00:04:46.723 "nvmf_create_target", 00:04:46.723 "nvmf_subsystem_allow_any_host", 00:04:46.723 
"nvmf_subsystem_set_keys", 00:04:46.723 "nvmf_subsystem_remove_host", 00:04:46.723 "nvmf_subsystem_add_host", 00:04:46.723 "nvmf_ns_remove_host", 00:04:46.723 "nvmf_ns_add_host", 00:04:46.723 "nvmf_subsystem_remove_ns", 00:04:46.723 "nvmf_subsystem_set_ns_ana_group", 00:04:46.723 "nvmf_subsystem_add_ns", 00:04:46.723 "nvmf_subsystem_listener_set_ana_state", 00:04:46.723 "nvmf_discovery_get_referrals", 00:04:46.723 "nvmf_discovery_remove_referral", 00:04:46.723 "nvmf_discovery_add_referral", 00:04:46.723 "nvmf_subsystem_remove_listener", 00:04:46.723 "nvmf_subsystem_add_listener", 00:04:46.723 "nvmf_delete_subsystem", 00:04:46.723 "nvmf_create_subsystem", 00:04:46.723 "nvmf_get_subsystems", 00:04:46.723 "env_dpdk_get_mem_stats", 00:04:46.723 "nbd_get_disks", 00:04:46.723 "nbd_stop_disk", 00:04:46.723 "nbd_start_disk", 00:04:46.723 "ublk_recover_disk", 00:04:46.723 "ublk_get_disks", 00:04:46.723 "ublk_stop_disk", 00:04:46.723 "ublk_start_disk", 00:04:46.723 "ublk_destroy_target", 00:04:46.723 "ublk_create_target", 00:04:46.723 "virtio_blk_create_transport", 00:04:46.723 "virtio_blk_get_transports", 00:04:46.723 "vhost_controller_set_coalescing", 00:04:46.723 "vhost_get_controllers", 00:04:46.723 "vhost_delete_controller", 00:04:46.723 "vhost_create_blk_controller", 00:04:46.723 "vhost_scsi_controller_remove_target", 00:04:46.723 "vhost_scsi_controller_add_target", 00:04:46.723 "vhost_start_scsi_controller", 00:04:46.723 "vhost_create_scsi_controller", 00:04:46.723 "thread_set_cpumask", 00:04:46.723 "scheduler_set_options", 00:04:46.723 "framework_get_governor", 00:04:46.723 "framework_get_scheduler", 00:04:46.723 "framework_set_scheduler", 00:04:46.723 "framework_get_reactors", 00:04:46.723 "thread_get_io_channels", 00:04:46.723 "thread_get_pollers", 00:04:46.723 "thread_get_stats", 00:04:46.723 "framework_monitor_context_switch", 00:04:46.723 "spdk_kill_instance", 00:04:46.723 "log_enable_timestamps", 00:04:46.723 "log_get_flags", 00:04:46.723 "log_clear_flag", 
00:04:46.723 "log_set_flag", 00:04:46.723 "log_get_level", 00:04:46.723 "log_set_level", 00:04:46.723 "log_get_print_level", 00:04:46.723 "log_set_print_level", 00:04:46.723 "framework_enable_cpumask_locks", 00:04:46.723 "framework_disable_cpumask_locks", 00:04:46.723 "framework_wait_init", 00:04:46.723 "framework_start_init", 00:04:46.723 "scsi_get_devices", 00:04:46.723 "bdev_get_histogram", 00:04:46.723 "bdev_enable_histogram", 00:04:46.723 "bdev_set_qos_limit", 00:04:46.723 "bdev_set_qd_sampling_period", 00:04:46.723 "bdev_get_bdevs", 00:04:46.723 "bdev_reset_iostat", 00:04:46.723 "bdev_get_iostat", 00:04:46.723 "bdev_examine", 00:04:46.723 "bdev_wait_for_examine", 00:04:46.723 "bdev_set_options", 00:04:46.723 "accel_get_stats", 00:04:46.723 "accel_set_options", 00:04:46.723 "accel_set_driver", 00:04:46.723 "accel_crypto_key_destroy", 00:04:46.723 "accel_crypto_keys_get", 00:04:46.723 "accel_crypto_key_create", 00:04:46.723 "accel_assign_opc", 00:04:46.723 "accel_get_module_info", 00:04:46.723 "accel_get_opc_assignments", 00:04:46.723 "vmd_rescan", 00:04:46.723 "vmd_remove_device", 00:04:46.723 "vmd_enable", 00:04:46.723 "sock_get_default_impl", 00:04:46.723 "sock_set_default_impl", 00:04:46.723 "sock_impl_set_options", 00:04:46.723 "sock_impl_get_options", 00:04:46.723 "iobuf_get_stats", 00:04:46.723 "iobuf_set_options", 00:04:46.723 "keyring_get_keys", 00:04:46.723 "framework_get_pci_devices", 00:04:46.723 "framework_get_config", 00:04:46.723 "framework_get_subsystems", 00:04:46.723 "fsdev_set_opts", 00:04:46.723 "fsdev_get_opts", 00:04:46.723 "trace_get_info", 00:04:46.723 "trace_get_tpoint_group_mask", 00:04:46.723 "trace_disable_tpoint_group", 00:04:46.723 "trace_enable_tpoint_group", 00:04:46.723 "trace_clear_tpoint_mask", 00:04:46.723 "trace_set_tpoint_mask", 00:04:46.723 "notify_get_notifications", 00:04:46.723 "notify_get_types", 00:04:46.723 "spdk_get_version", 00:04:46.723 "rpc_get_methods" 00:04:46.723 ] 00:04:46.723 08:42:24 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:46.723 08:42:24 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.723 08:42:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:46.723 08:42:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:46.723 08:42:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57779 00:04:46.723 08:42:24 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57779 ']' 00:04:46.723 08:42:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57779 00:04:46.723 08:42:24 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:46.723 08:42:24 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.723 08:42:24 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57779 00:04:46.723 killing process with pid 57779 00:04:46.723 08:42:24 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:46.723 08:42:24 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.723 08:42:24 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57779' 00:04:46.723 08:42:24 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57779 00:04:46.723 08:42:24 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57779 00:04:50.015 ************************************ 00:04:50.015 END TEST spdkcli_tcp 00:04:50.015 ************************************ 00:04:50.015 00:04:50.015 real 0m4.735s 00:04:50.015 user 0m7.995s 00:04:50.015 sys 0m0.826s 00:04:50.015 08:42:27 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.015 08:42:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.015 08:42:27 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:50.015 08:42:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.015 08:42:27 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.015 08:42:27 -- common/autotest_common.sh@10 -- # set +x 00:04:50.015 ************************************ 00:04:50.015 START TEST dpdk_mem_utility 00:04:50.015 ************************************ 00:04:50.015 08:42:27 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:50.015 * Looking for test storage... 00:04:50.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:50.015 08:42:27 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:50.015 08:42:27 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:04:50.015 08:42:27 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:50.015 08:42:27 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:50.015 
08:42:27 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.015 08:42:27 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:50.015 08:42:27 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.015 08:42:27 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:50.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.015 --rc genhtml_branch_coverage=1 00:04:50.015 --rc genhtml_function_coverage=1 00:04:50.015 --rc genhtml_legend=1 00:04:50.015 --rc geninfo_all_blocks=1 00:04:50.015 --rc geninfo_unexecuted_blocks=1 00:04:50.015 00:04:50.015 ' 00:04:50.015 08:42:27 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:50.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.015 --rc 
genhtml_branch_coverage=1 00:04:50.015 --rc genhtml_function_coverage=1 00:04:50.015 --rc genhtml_legend=1 00:04:50.015 --rc geninfo_all_blocks=1 00:04:50.015 --rc geninfo_unexecuted_blocks=1 00:04:50.015 00:04:50.015 ' 00:04:50.015 08:42:27 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:50.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.016 --rc genhtml_branch_coverage=1 00:04:50.016 --rc genhtml_function_coverage=1 00:04:50.016 --rc genhtml_legend=1 00:04:50.016 --rc geninfo_all_blocks=1 00:04:50.016 --rc geninfo_unexecuted_blocks=1 00:04:50.016 00:04:50.016 ' 00:04:50.016 08:42:27 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:50.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.016 --rc genhtml_branch_coverage=1 00:04:50.016 --rc genhtml_function_coverage=1 00:04:50.016 --rc genhtml_legend=1 00:04:50.016 --rc geninfo_all_blocks=1 00:04:50.016 --rc geninfo_unexecuted_blocks=1 00:04:50.016 00:04:50.016 ' 00:04:50.016 08:42:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:50.016 08:42:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57907 00:04:50.016 08:42:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.016 08:42:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57907 00:04:50.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:50.016 08:42:27 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 57907 ']' 00:04:50.016 08:42:27 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.016 08:42:27 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.016 08:42:27 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.016 08:42:27 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.016 08:42:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.016 [2024-09-28 08:42:27.790398] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:50.016 [2024-09-28 08:42:27.790683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57907 ] 00:04:50.016 [2024-09-28 08:42:27.958254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.275 [2024-09-28 08:42:28.208387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.213 08:42:29 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.213 08:42:29 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:51.213 08:42:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:51.213 08:42:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:51.213 08:42:29 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.213 08:42:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.213 { 00:04:51.213 "filename": "/tmp/spdk_mem_dump.txt" 00:04:51.213 } 00:04:51.213 
08:42:29 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.213 08:42:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:51.474 DPDK memory size 866.000000 MiB in 1 heap(s) 00:04:51.474 1 heaps totaling size 866.000000 MiB 00:04:51.474 size: 866.000000 MiB heap id: 0 00:04:51.474 end heaps---------- 00:04:51.474 9 mempools totaling size 642.649841 MiB 00:04:51.474 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:51.474 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:51.474 size: 92.545471 MiB name: bdev_io_57907 00:04:51.474 size: 51.011292 MiB name: evtpool_57907 00:04:51.474 size: 50.003479 MiB name: msgpool_57907 00:04:51.474 size: 36.509338 MiB name: fsdev_io_57907 00:04:51.474 size: 21.763794 MiB name: PDU_Pool 00:04:51.474 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:51.474 size: 0.026123 MiB name: Session_Pool 00:04:51.474 end mempools------- 00:04:51.474 6 memzones totaling size 4.142822 MiB 00:04:51.474 size: 1.000366 MiB name: RG_ring_0_57907 00:04:51.474 size: 1.000366 MiB name: RG_ring_1_57907 00:04:51.474 size: 1.000366 MiB name: RG_ring_4_57907 00:04:51.474 size: 1.000366 MiB name: RG_ring_5_57907 00:04:51.474 size: 0.125366 MiB name: RG_ring_2_57907 00:04:51.474 size: 0.015991 MiB name: RG_ring_3_57907 00:04:51.474 end memzones------- 00:04:51.474 08:42:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:51.474 heap id: 0 total size: 866.000000 MiB number of busy elements: 315 number of free elements: 19 00:04:51.474 list of free elements. 
size: 19.913574 MiB 00:04:51.474 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:51.474 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:51.474 element at address: 0x200009600000 with size: 1.995972 MiB 00:04:51.474 element at address: 0x20000d800000 with size: 1.995972 MiB 00:04:51.474 element at address: 0x200007000000 with size: 1.991028 MiB 00:04:51.474 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:04:51.474 element at address: 0x20001c300040 with size: 0.999939 MiB 00:04:51.474 element at address: 0x20001c400000 with size: 0.999084 MiB 00:04:51.474 element at address: 0x200035000000 with size: 0.994324 MiB 00:04:51.474 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:04:51.474 element at address: 0x20001c700040 with size: 0.936401 MiB 00:04:51.474 element at address: 0x200000200000 with size: 0.832153 MiB 00:04:51.474 element at address: 0x20001de00000 with size: 0.562195 MiB 00:04:51.474 element at address: 0x200003e00000 with size: 0.490417 MiB 00:04:51.474 element at address: 0x20001c000000 with size: 0.488220 MiB 00:04:51.474 element at address: 0x20001c800000 with size: 0.485413 MiB 00:04:51.474 element at address: 0x200015e00000 with size: 0.443237 MiB 00:04:51.474 element at address: 0x20002b200000 with size: 0.390442 MiB 00:04:51.474 element at address: 0x200003a00000 with size: 0.352844 MiB 00:04:51.474 list of standard malloc elements. 
size: 199.287720 MiB
00:04:51.474 element at address: 0x20000d9fef80 with size: 132.000183 MiB
00:04:51.474 element at address: 0x2000097fef80 with size: 64.000183 MiB
00:04:51.474 element at address: 0x20001bdfff80 with size: 1.000183 MiB
00:04:51.474 element at address: 0x20001c1fff80 with size: 1.000183 MiB
00:04:51.474 element at address: 0x20001c5fff80 with size: 1.000183 MiB
00:04:51.474 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:04:51.474 element at address: 0x20001c7eff40 with size: 0.062683 MiB
00:04:51.474 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:04:51.474 element at address: 0x20000d7ff040 with size: 0.000427 MiB
00:04:51.474 element at address: 0x20001c7efdc0 with size: 0.000366 MiB
00:04:51.474 element at address: 0x200015dff040 with size: 0.000305 MiB
00:04:51.474 [several hundred further elements of size 0.000244 MiB, from address 0x2000002d5080 through 0x20002b26fe80, identical in form to the entries above]
00:04:51.476 list of memzone associated elements. size: 646.798706 MiB
00:04:51.476 element at address: 0x20001de954c0 with size: 211.416809 MiB
00:04:51.476 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:51.476 element at address: 0x20002b26ff80 with size: 157.562622 MiB
00:04:51.476 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:51.476 element at address: 0x200015ff4740 with size: 92.045105 MiB
00:04:51.476 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57907_0
00:04:51.476 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:04:51.476 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57907_0
00:04:51.476 element at address: 0x200003fff340 with size: 48.003113 MiB
00:04:51.476 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57907_0
00:04:51.476 element at address: 0x2000071fdb40 with size: 36.008972 MiB
00:04:51.476 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57907_0
00:04:51.476 element at address: 0x20001c9be900 with size: 20.255615 MiB
00:04:51.476 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:51.476 element at address: 0x2000351feb00 with size: 18.005127 MiB
00:04:51.476 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:51.476 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:04:51.476 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57907
00:04:51.476 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:04:51.476 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57907
00:04:51.476 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:04:51.476 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57907
00:04:51.476 element at address: 0x20001c0fde00 with size: 1.008179 MiB
00:04:51.476 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:51.476 element at address: 0x20001c8bc780 with size: 1.008179 MiB
00:04:51.476 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:51.476 element at address: 0x20001bcfde00 with size: 1.008179 MiB
00:04:51.476 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:51.476 element at address: 0x200015ef25c0 with size: 1.008179 MiB
00:04:51.476 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:51.476 element at address: 0x200003eff100 with size: 1.000549 MiB
00:04:51.476 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57907
00:04:51.476 element at address: 0x200003affb80 with size: 1.000549 MiB
00:04:51.476 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57907
00:04:51.476 element at address: 0x20001c4ffd40 with size: 1.000549 MiB
00:04:51.476 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57907
00:04:51.476 element at address: 0x2000350fe8c0 with size: 1.000549 MiB
00:04:51.476 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57907
00:04:51.476 element at address: 0x200003a7f4c0 with size: 0.500549 MiB
00:04:51.476 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57907
00:04:51.476 element at address: 0x200003e7edc0 with size: 0.500549 MiB
00:04:51.476 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57907
00:04:51.476 element at address: 0x20001c07dac0 with size: 0.500549 MiB
00:04:51.476 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:51.476 element at address: 0x200015e72280 with size: 0.500549 MiB
00:04:51.476 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:51.476 element at address: 0x20001c87c440 with size: 0.250549 MiB
00:04:51.476 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:51.476 element at address: 0x200003a5e780 with size: 0.125549 MiB
00:04:51.476 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57907
00:04:51.476 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB
00:04:51.476 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:51.476 element at address: 0x20002b264140 with size: 0.023804 MiB
00:04:51.476 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:51.476 element at address: 0x200003a5a540 with size: 0.016174 MiB
00:04:51.476 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57907
00:04:51.476 element at address: 0x20002b26a2c0 with size: 0.002502 MiB
00:04:51.476 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:51.476 element at address: 0x2000002d6180 with size: 0.000366 MiB
00:04:51.476 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57907
00:04:51.476 element at address: 0x200003aff800 with size: 0.000366 MiB
00:04:51.476 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57907
00:04:51.476 element at address: 0x200015dffd80 with size: 0.000366 MiB
00:04:51.476 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57907
00:04:51.476 element at address: 0x20002b26ae00 with size: 0.000366 MiB
00:04:51.476 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:51.476 08:42:29 dpdk_mem_utility --
dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:51.477 08:42:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57907
00:04:51.477 08:42:29 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 57907 ']'
00:04:51.477 08:42:29 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 57907
00:04:51.477 08:42:29 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:04:51.477 08:42:29 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:51.477 08:42:29 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57907
killing process with pid 57907
08:42:29 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:51.477 08:42:29 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:51.477 08:42:29 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57907'
00:04:51.477 08:42:29 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 57907
00:04:51.477 08:42:29 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 57907
00:04:54.778 ************************************
00:04:54.778 END TEST dpdk_mem_utility
00:04:54.778 ************************************
00:04:54.778
00:04:54.778 real 0m4.582s
00:04:54.778 user 0m4.320s
00:04:54.778 sys 0m0.742s
00:04:54.778 08:42:32 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:54.778 08:42:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:54.778 08:42:32 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:54.778 08:42:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:54.778 08:42:32 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:54.778 08:42:32 -- common/autotest_common.sh@10 -- # set +x
00:04:54.778 ************************************
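The xtrace above walks through the harness's `killprocess` helper: guard against an empty PID, probe liveness with `kill -0`, look up the process name with `ps`, refuse to signal a `sudo` wrapper, then `kill` and `wait`. A minimal standalone sketch of that pattern follows; the function body is a simplified re-creation for illustration, not the actual `autotest_common.sh` helper.

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess pattern traced above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # mirrors the '[' -z "$pid" ']' guard
    kill -0 "$pid" 2>/dev/null || return 0     # process already gone: nothing to do
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")  # as traced: ps --no-headers -o comm= <pid>
    [ "$process_name" = sudo ] && return 1     # safety check from the trace: never kill sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap it (works when pid is our own child)
}

# demo: launch a background sleeper and tear it down
sleep 30 &
killprocess "$!"
```

As in the trace, the `wait` at the end only succeeds when the PID is a child of the calling shell, which is the case for test apps the harness launched itself.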
00:04:54.778 START TEST event
00:04:54.778 ************************************
00:04:54.778 08:42:32 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:54.778 * Looking for test storage...
00:04:54.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:04:54.778 08:42:32 event -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:04:54.778 08:42:32 event -- common/autotest_common.sh@1681 -- # lcov --version
00:04:54.778 08:42:32 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:04:54.778 08:42:32 event -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:04:54.778 08:42:32 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:54.778 08:42:32 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:54.778 08:42:32 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:54.778 08:42:32 event -- scripts/common.sh@336 -- # IFS=.-:
00:04:54.778 08:42:32 event -- scripts/common.sh@336 -- # read -ra ver1
00:04:54.778 08:42:32 event -- scripts/common.sh@337 -- # IFS=.-:
00:04:54.778 08:42:32 event -- scripts/common.sh@337 -- # read -ra ver2
00:04:54.778 08:42:32 event -- scripts/common.sh@338 -- # local 'op=<'
00:04:54.778 08:42:32 event -- scripts/common.sh@340 -- # ver1_l=2
00:04:54.778 08:42:32 event -- scripts/common.sh@341 -- # ver2_l=1
00:04:54.778 08:42:32 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:54.778 08:42:32 event -- scripts/common.sh@344 -- # case "$op" in
00:04:54.778 08:42:32 event -- scripts/common.sh@345 -- # : 1
00:04:54.778 08:42:32 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:54.778 08:42:32 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:54.778 08:42:32 event -- scripts/common.sh@365 -- # decimal 1
00:04:54.778 08:42:32 event -- scripts/common.sh@353 -- # local d=1
00:04:54.778 08:42:32 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:54.778 08:42:32 event -- scripts/common.sh@355 -- # echo 1
00:04:54.778 08:42:32 event -- scripts/common.sh@365 -- # ver1[v]=1
00:04:54.778 08:42:32 event -- scripts/common.sh@366 -- # decimal 2
00:04:54.778 08:42:32 event -- scripts/common.sh@353 -- # local d=2
00:04:54.778 08:42:32 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:54.778 08:42:32 event -- scripts/common.sh@355 -- # echo 2
00:04:54.778 08:42:32 event -- scripts/common.sh@366 -- # ver2[v]=2
00:04:54.778 08:42:32 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:54.778 08:42:32 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:54.778 08:42:32 event -- scripts/common.sh@368 -- # return 0
00:04:54.778 08:42:32 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:54.778 08:42:32 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:04:54.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:54.778 --rc genhtml_branch_coverage=1
00:04:54.778 --rc genhtml_function_coverage=1
00:04:54.778 --rc genhtml_legend=1
00:04:54.778 --rc geninfo_all_blocks=1
00:04:54.778 --rc geninfo_unexecuted_blocks=1
00:04:54.778
00:04:54.778 '
00:04:54.778 08:42:32 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:04:54.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:54.778 --rc genhtml_branch_coverage=1
00:04:54.778 --rc genhtml_function_coverage=1
00:04:54.778 --rc genhtml_legend=1
00:04:54.778 --rc geninfo_all_blocks=1
00:04:54.778 --rc geninfo_unexecuted_blocks=1
00:04:54.778
00:04:54.778 '
00:04:54.778 08:42:32 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:04:54.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:54.778 --rc genhtml_branch_coverage=1
00:04:54.778 --rc genhtml_function_coverage=1
00:04:54.778 --rc genhtml_legend=1
00:04:54.778 --rc geninfo_all_blocks=1
00:04:54.778 --rc geninfo_unexecuted_blocks=1
00:04:54.778
00:04:54.778 '
00:04:54.778 08:42:32 event -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:04:54.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:54.778 --rc genhtml_branch_coverage=1
00:04:54.778 --rc genhtml_function_coverage=1
00:04:54.778 --rc genhtml_legend=1
00:04:54.778 --rc geninfo_all_blocks=1
00:04:54.778 --rc geninfo_unexecuted_blocks=1
00:04:54.778
00:04:54.778 '
00:04:54.778 08:42:32 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:04:54.778 08:42:32 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:54.778 08:42:32 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:54.778 08:42:32 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:04:54.778 08:42:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:54.778 08:42:32 event -- common/autotest_common.sh@10 -- # set +x
00:04:54.778 ************************************
00:04:54.778 START TEST event_perf
00:04:54.778 ************************************
00:04:54.778 08:42:32 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:54.778 Running I/O for 1 seconds...[2024-09-28 08:42:32.383516] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:04:54.778 [2024-09-28 08:42:32.384185] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58020 ]
00:04:54.778 [2024-09-28 08:42:32.553592] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:55.052 [2024-09-28 08:42:32.802896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:04:55.052 [2024-09-28 08:42:32.803161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:55.052 [2024-09-28 08:42:32.803082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:04:55.052 Running I/O for 1 seconds...[2024-09-28 08:42:32.803195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:04:56.433
00:04:56.433 lcore 0: 193807
00:04:56.433 lcore 1: 193806
00:04:56.433 lcore 2: 193805
00:04:56.433 lcore 3: 193806
00:04:56.433 done.
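For a quick sanity check, the per-lcore counters reported by event_perf can be totalled with a one-liner. The figures below are copied from this run; the `awk` program itself is just a generic way to sum `lcore N: COUNT` records:

```shell
#!/usr/bin/env bash
# Total the per-lcore event counts printed by event_perf above.
total=$(printf '%s\n' \
    'lcore 0: 193807' \
    'lcore 1: 193806' \
    'lcore 2: 193805' \
    'lcore 3: 193806' |
    awk '/^lcore/ {sum += $3} END {print sum}')
echo "total events in 1 second across 4 cores: $total"
# → total events in 1 second across 4 cores: 775224
```

The near-identical per-core counts are expected here, since all four reactors poll the same one-second workload.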
00:04:56.433 ************************************ 00:04:56.433 END TEST event_perf 00:04:56.433 ************************************ 00:04:56.433 00:04:56.433 real 0m1.885s 00:04:56.433 user 0m4.602s 00:04:56.433 sys 0m0.159s 00:04:56.433 08:42:34 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.433 08:42:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:56.433 08:42:34 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:56.433 08:42:34 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:56.433 08:42:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.433 08:42:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.433 ************************************ 00:04:56.433 START TEST event_reactor 00:04:56.433 ************************************ 00:04:56.433 08:42:34 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:56.433 [2024-09-28 08:42:34.334720] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:56.433 [2024-09-28 08:42:34.335406] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58065 ] 00:04:56.694 [2024-09-28 08:42:34.501449] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.954 [2024-09-28 08:42:34.746179] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.335 test_start 00:04:58.335 oneshot 00:04:58.335 tick 100 00:04:58.335 tick 100 00:04:58.335 tick 250 00:04:58.335 tick 100 00:04:58.335 tick 100 00:04:58.335 tick 100 00:04:58.335 tick 250 00:04:58.335 tick 500 00:04:58.335 tick 100 00:04:58.335 tick 100 00:04:58.335 tick 250 00:04:58.335 tick 100 00:04:58.335 tick 100 00:04:58.335 test_end 00:04:58.335 00:04:58.335 real 0m1.863s 00:04:58.335 user 0m1.623s 00:04:58.335 sys 0m0.132s 00:04:58.335 08:42:36 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.335 08:42:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:58.335 ************************************ 00:04:58.335 END TEST event_reactor 00:04:58.335 ************************************ 00:04:58.335 08:42:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:58.335 08:42:36 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:58.335 08:42:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.335 08:42:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.335 ************************************ 00:04:58.335 START TEST event_reactor_perf 00:04:58.335 ************************************ 00:04:58.335 08:42:36 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:58.335 [2024-09-28 
08:42:36.265054] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:58.335 [2024-09-28 08:42:36.265171] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58102 ] 00:04:58.595 [2024-09-28 08:42:36.430735] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.854 [2024-09-28 08:42:36.676100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.234 test_start 00:05:00.234 test_end 00:05:00.234 Performance: 412722 events per second 00:05:00.234 00:05:00.234 real 0m1.856s 00:05:00.234 user 0m1.602s 00:05:00.234 sys 0m0.146s 00:05:00.234 ************************************ 00:05:00.234 END TEST event_reactor_perf 00:05:00.234 ************************************ 00:05:00.234 08:42:38 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.234 08:42:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.234 08:42:38 event -- event/event.sh@49 -- # uname -s 00:05:00.234 08:42:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:00.234 08:42:38 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:00.234 08:42:38 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.234 08:42:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.234 08:42:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.234 ************************************ 00:05:00.234 START TEST event_scheduler 00:05:00.234 ************************************ 00:05:00.234 08:42:38 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:00.494 * Looking for test storage... 
00:05:00.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:00.494 08:42:38 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:00.494 08:42:38 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:00.494 08:42:38 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:00.494 08:42:38 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.494 08:42:38 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:00.494 08:42:38 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.494 08:42:38 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:00.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.494 --rc genhtml_branch_coverage=1 00:05:00.494 --rc genhtml_function_coverage=1 00:05:00.494 --rc genhtml_legend=1 00:05:00.494 --rc geninfo_all_blocks=1 00:05:00.494 --rc geninfo_unexecuted_blocks=1 00:05:00.494 00:05:00.494 ' 00:05:00.494 08:42:38 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:00.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.494 --rc genhtml_branch_coverage=1 00:05:00.494 --rc genhtml_function_coverage=1 00:05:00.494 --rc 
genhtml_legend=1 00:05:00.494 --rc geninfo_all_blocks=1 00:05:00.494 --rc geninfo_unexecuted_blocks=1 00:05:00.494 00:05:00.494 ' 00:05:00.494 08:42:38 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:00.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.494 --rc genhtml_branch_coverage=1 00:05:00.494 --rc genhtml_function_coverage=1 00:05:00.494 --rc genhtml_legend=1 00:05:00.494 --rc geninfo_all_blocks=1 00:05:00.494 --rc geninfo_unexecuted_blocks=1 00:05:00.494 00:05:00.494 ' 00:05:00.494 08:42:38 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:00.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.494 --rc genhtml_branch_coverage=1 00:05:00.494 --rc genhtml_function_coverage=1 00:05:00.494 --rc genhtml_legend=1 00:05:00.494 --rc geninfo_all_blocks=1 00:05:00.494 --rc geninfo_unexecuted_blocks=1 00:05:00.494 00:05:00.494 ' 00:05:00.494 08:42:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:00.494 08:42:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58178 00:05:00.494 08:42:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:00.494 08:42:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.494 08:42:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58178 00:05:00.494 08:42:38 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58178 ']' 00:05:00.494 08:42:38 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.494 08:42:38 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.494 08:42:38 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:00.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.494 08:42:38 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.494 08:42:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.494 [2024-09-28 08:42:38.471537] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:00.494 [2024-09-28 08:42:38.471702] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58178 ] 00:05:00.754 [2024-09-28 08:42:38.642562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:01.014 [2024-09-28 08:42:38.892712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.014 [2024-09-28 08:42:38.892857] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.014 [2024-09-28 08:42:38.892994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.014 [2024-09-28 08:42:38.893030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.583 08:42:39 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.583 08:42:39 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:01.583 08:42:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:01.583 08:42:39 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.583 08:42:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.583 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:01.583 POWER: Cannot set governor of lcore 0 to userspace 00:05:01.583 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:01.583 POWER: Cannot set governor of lcore 0 to performance 00:05:01.583 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:01.583 POWER: Cannot set governor of lcore 0 to userspace 00:05:01.583 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:01.583 POWER: Cannot set governor of lcore 0 to userspace 00:05:01.583 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:01.583 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:01.583 POWER: Unable to set Power Management Environment for lcore 0 00:05:01.583 [2024-09-28 08:42:39.302101] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:01.583 [2024-09-28 08:42:39.302122] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:01.583 [2024-09-28 08:42:39.302133] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:01.583 [2024-09-28 08:42:39.302158] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:01.583 [2024-09-28 08:42:39.302166] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:01.583 [2024-09-28 08:42:39.302176] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:01.583 08:42:39 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.583 08:42:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:01.583 08:42:39 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.583 08:42:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.842 [2024-09-28 08:42:39.672875] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:01.842 08:42:39 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.842 08:42:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:01.842 08:42:39 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.843 08:42:39 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.843 08:42:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.843 ************************************ 00:05:01.843 START TEST scheduler_create_thread 00:05:01.843 ************************************ 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.843 2 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.843 3 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.843 4 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.843 5 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.843 6 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:01.843 7 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.843 8 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.843 9 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.843 10 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.843 08:42:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.782 08:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.782 08:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:02.782 08:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.782 08:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.160 08:42:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.161 08:42:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:04.161 08:42:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:04.161 08:42:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.161 08:42:42 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.097 ************************************ 00:05:05.097 END TEST scheduler_create_thread 00:05:05.097 ************************************ 00:05:05.097 08:42:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.097 00:05:05.097 real 0m3.374s 00:05:05.097 user 0m0.025s 00:05:05.097 sys 0m0.010s 00:05:05.097 08:42:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.097 08:42:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.356 08:42:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:05.356 08:42:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58178 00:05:05.356 08:42:43 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58178 ']' 00:05:05.356 08:42:43 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58178 00:05:05.356 08:42:43 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:05.356 08:42:43 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:05.356 08:42:43 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58178 00:05:05.356 killing process with pid 58178 00:05:05.356 08:42:43 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:05.356 08:42:43 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:05.356 08:42:43 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58178' 00:05:05.356 08:42:43 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58178 00:05:05.356 08:42:43 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58178 00:05:05.615 [2024-09-28 08:42:43.444516] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:06.994 00:05:06.994 real 0m6.711s 00:05:06.994 user 0m12.732s 00:05:06.994 sys 0m0.665s 00:05:06.994 ************************************ 00:05:06.994 END TEST event_scheduler 00:05:06.994 ************************************ 00:05:06.994 08:42:44 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.994 08:42:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.995 08:42:44 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:06.995 08:42:44 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:06.995 08:42:44 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.995 08:42:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.995 08:42:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.995 ************************************ 00:05:06.995 START TEST app_repeat 00:05:06.995 ************************************ 00:05:06.995 08:42:44 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:06.995 08:42:44 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.995 08:42:44 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.995 08:42:44 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:06.995 08:42:44 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.995 08:42:44 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:06.995 08:42:44 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:06.995 08:42:44 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:06.995 08:42:44 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58306 00:05:06.995 08:42:44 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:06.995 
08:42:44 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.995 08:42:44 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58306' 00:05:06.995 Process app_repeat pid: 58306 00:05:06.995 spdk_app_start Round 0 00:05:06.995 08:42:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:06.995 08:42:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:06.995 08:42:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58306 /var/tmp/spdk-nbd.sock 00:05:06.995 08:42:44 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58306 ']' 00:05:06.995 08:42:44 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.995 08:42:44 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.995 08:42:44 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:06.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:06.995 08:42:44 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.995 08:42:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.254 [2024-09-28 08:42:44.999039] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:07.254 [2024-09-28 08:42:44.999257] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58306 ] 00:05:07.254 [2024-09-28 08:42:45.167128] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.513 [2024-09-28 08:42:45.407638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.513 [2024-09-28 08:42:45.407723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.081 08:42:45 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.081 08:42:45 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:08.081 08:42:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.340 Malloc0 00:05:08.340 08:42:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.599 Malloc1 00:05:08.600 08:42:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.600 08:42:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.600 08:42:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.600 08:42:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:08.600 08:42:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.600 08:42:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:08.600 08:42:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.600 08:42:46 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.600 08:42:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.600 08:42:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:08.600 08:42:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.600 08:42:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:08.600 08:42:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:08.600 08:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:08.600 08:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.600 08:42:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:08.859 /dev/nbd0 00:05:08.859 08:42:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:08.859 08:42:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:08.859 08:42:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:08.859 08:42:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:08.859 08:42:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:08.859 08:42:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:08.859 08:42:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:08.859 08:42:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:08.859 08:42:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:08.859 08:42:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:08.859 08:42:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.859 1+0 records in 00:05:08.859 1+0 
records out 00:05:08.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355228 s, 11.5 MB/s 00:05:08.859 08:42:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.859 08:42:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:08.859 08:42:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.859 08:42:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:08.859 08:42:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:08.859 08:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.859 08:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.859 08:42:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.119 /dev/nbd1 00:05:09.119 08:42:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:09.119 08:42:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:09.119 08:42:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:09.119 08:42:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:09.119 08:42:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:09.119 08:42:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:09.119 08:42:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:09.119 08:42:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:09.119 08:42:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:09.119 08:42:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:09.119 08:42:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.119 1+0 records in 00:05:09.119 1+0 records out 00:05:09.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395139 s, 10.4 MB/s 00:05:09.119 08:42:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.119 08:42:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:09.119 08:42:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.119 08:42:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:09.119 08:42:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:09.119 08:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.119 08:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.119 08:42:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.119 08:42:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.119 08:42:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.119 08:42:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:09.119 { 00:05:09.119 "nbd_device": "/dev/nbd0", 00:05:09.119 "bdev_name": "Malloc0" 00:05:09.119 }, 00:05:09.119 { 00:05:09.119 "nbd_device": "/dev/nbd1", 00:05:09.119 "bdev_name": "Malloc1" 00:05:09.119 } 00:05:09.119 ]' 00:05:09.119 08:42:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:09.119 { 00:05:09.119 "nbd_device": "/dev/nbd0", 00:05:09.119 "bdev_name": "Malloc0" 00:05:09.119 }, 00:05:09.119 { 00:05:09.119 "nbd_device": "/dev/nbd1", 00:05:09.119 "bdev_name": "Malloc1" 00:05:09.119 } 00:05:09.119 ]' 00:05:09.119 08:42:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.378 /dev/nbd1' 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.378 /dev/nbd1' 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.378 256+0 records in 00:05:09.378 256+0 records out 00:05:09.378 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00923858 s, 113 MB/s 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.378 256+0 records in 00:05:09.378 256+0 records out 00:05:09.378 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214429 s, 48.9 MB/s 00:05:09.378 08:42:47 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.378 256+0 records in 00:05:09.378 256+0 records out 00:05:09.378 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270915 s, 38.7 MB/s 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.378 08:42:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:09.644 08:42:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:09.644 08:42:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:09.644 08:42:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:09.644 08:42:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.644 08:42:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.644 08:42:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:09.644 08:42:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.644 08:42:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.644 08:42:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.644 08:42:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.924 08:42:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.924 08:42:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.924 08:42:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.924 08:42:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.924 08:42:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.924 08:42:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.924 08:42:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:09.924 08:42:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.924 08:42:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.924 08:42:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.924 08:42:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.924 08:42:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.924 08:42:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.924 08:42:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.197 08:42:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.197 08:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.197 08:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.198 08:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:10.198 08:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.198 08:42:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.198 08:42:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.198 08:42:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.198 08:42:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.198 08:42:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.457 08:42:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:11.838 [2024-09-28 08:42:49.717047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.098 [2024-09-28 08:42:49.942308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.098 [2024-09-28 08:42:49.942314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.358 
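The `waitfornbd` calls traced above follow a simple polling pattern: retry up to 20 times, grepping the device name out of `/proc/partitions`, and break as soon as it appears. A minimal sketch of that pattern (not SPDK's actual helper — the function name and the optional file argument are illustrative, added so the sketch runs without a real `/dev/nbd*` device):

```shell
# Sketch of the waitfornbd polling loop seen in the trace. The second
# argument is an assumption for testability; the real helper always
# greps /proc/partitions.
wait_for_block_dev() {
    local name=$1 partitions=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        # Succeed as soon as the device name shows up in the listing.
        grep -q -w "$name" "$partitions" && return 0
        sleep 0.05
    done
    return 1  # device never appeared within the retry budget
}

# Demonstrate against a fake partitions listing.
fake=$(mktemp)
echo "nbd0" > "$fake"
wait_for_block_dev nbd0 "$fake" && echo "nbd0 present"
rm -f "$fake"
```

The trace additionally confirms the device with a single `dd ... iflag=direct` read after the grep succeeds, which catches devices that are listed but cannot yet service I/O.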
[2024-09-28 08:42:50.164605] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.358 [2024-09-28 08:42:50.164672] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:13.738 spdk_app_start Round 1 00:05:13.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.738 08:42:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.738 08:42:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:13.738 08:42:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58306 /var/tmp/spdk-nbd.sock 00:05:13.738 08:42:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58306 ']' 00:05:13.738 08:42:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.738 08:42:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.738 08:42:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:13.738 08:42:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.738 08:42:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.738 08:42:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.738 08:42:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:13.738 08:42:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.997 Malloc0 00:05:13.997 08:42:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.257 Malloc1 00:05:14.257 08:42:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.257 08:42:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.257 08:42:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.257 08:42:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.257 08:42:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.257 08:42:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.257 08:42:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.257 08:42:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.257 08:42:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.257 08:42:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.257 08:42:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.257 08:42:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.257 08:42:52 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.257 08:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.257 08:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.257 08:42:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.517 /dev/nbd0 00:05:14.517 08:42:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.517 08:42:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.517 08:42:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:14.517 08:42:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:14.517 08:42:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:14.517 08:42:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:14.517 08:42:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:14.517 08:42:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:14.517 08:42:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:14.517 08:42:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:14.517 08:42:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.517 1+0 records in 00:05:14.517 1+0 records out 00:05:14.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385864 s, 10.6 MB/s 00:05:14.518 08:42:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.518 08:42:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:14.518 08:42:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.518 
08:42:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:14.518 08:42:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:14.518 08:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.518 08:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.518 08:42:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.778 /dev/nbd1 00:05:14.778 08:42:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.778 08:42:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.778 08:42:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:14.778 08:42:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:14.778 08:42:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:14.778 08:42:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:14.778 08:42:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:14.778 08:42:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:14.778 08:42:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:14.778 08:42:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:14.778 08:42:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.778 1+0 records in 00:05:14.778 1+0 records out 00:05:14.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034039 s, 12.0 MB/s 00:05:14.778 08:42:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.778 08:42:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:14.778 08:42:52 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.778 08:42:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:14.778 08:42:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:14.778 08:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.778 08:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.778 08:42:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.778 08:42:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.778 08:42:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:15.038 { 00:05:15.038 "nbd_device": "/dev/nbd0", 00:05:15.038 "bdev_name": "Malloc0" 00:05:15.038 }, 00:05:15.038 { 00:05:15.038 "nbd_device": "/dev/nbd1", 00:05:15.038 "bdev_name": "Malloc1" 00:05:15.038 } 00:05:15.038 ]' 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.038 { 00:05:15.038 "nbd_device": "/dev/nbd0", 00:05:15.038 "bdev_name": "Malloc0" 00:05:15.038 }, 00:05:15.038 { 00:05:15.038 "nbd_device": "/dev/nbd1", 00:05:15.038 "bdev_name": "Malloc1" 00:05:15.038 } 00:05:15.038 ]' 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.038 /dev/nbd1' 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.038 /dev/nbd1' 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.038 
08:42:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.038 256+0 records in 00:05:15.038 256+0 records out 00:05:15.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104136 s, 101 MB/s 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.038 08:42:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.038 256+0 records in 00:05:15.038 256+0 records out 00:05:15.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208801 s, 50.2 MB/s 00:05:15.039 08:42:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.039 08:42:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.039 256+0 records in 00:05:15.039 256+0 records out 00:05:15.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256142 s, 40.9 MB/s 00:05:15.039 08:42:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:15.039 08:42:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.039 08:42:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.039 08:42:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.039 08:42:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.039 08:42:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.039 08:42:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.039 08:42:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.039 08:42:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.039 08:42:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.039 08:42:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.039 08:42:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.039 08:42:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.039 08:42:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.039 08:42:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.039 08:42:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.039 08:42:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.039 08:42:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.039 08:42:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.298 08:42:53 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.298 08:42:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.298 08:42:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.298 08:42:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.298 08:42:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.298 08:42:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.298 08:42:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.298 08:42:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.298 08:42:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.298 08:42:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.557 08:42:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.557 08:42:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.557 08:42:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.557 08:42:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.557 08:42:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.557 08:42:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.557 08:42:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.557 08:42:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.557 08:42:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.557 08:42:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.557 08:42:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.817 08:42:53 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.817 08:42:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.817 08:42:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.817 08:42:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.817 08:42:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.817 08:42:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.817 08:42:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:15.817 08:42:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.817 08:42:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.817 08:42:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.817 08:42:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.817 08:42:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.817 08:42:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.385 08:42:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:17.765 [2024-09-28 08:42:55.454195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.765 [2024-09-28 08:42:55.680457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.765 [2024-09-28 08:42:55.680486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.025 [2024-09-28 08:42:55.898718] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.025 [2024-09-28 08:42:55.898793] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.403 spdk_app_start Round 2 00:05:19.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
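Each round's `nbd_dd_data_verify` sequence in the trace boils down to: fill a temp file with 1 MiB of random data, `dd` it onto every NBD device, then `cmp` each device back against the source. An illustrative reconstruction under stated assumptions (plain temp files stand in for `/dev/nbd0` and `/dev/nbd1`, and `status=none` replaces the trace's visible dd statistics):

```shell
# Write-then-verify flow, with files standing in for the NBD devices.
src=$(mktemp) dev0=$(mktemp) dev1=$(mktemp)

# "write" phase: 256 x 4 KiB = 1 MiB of random data, copied to each target.
dd if=/dev/urandom of="$src" bs=4096 count=256 status=none
for dev in "$dev0" "$dev1"; do
    dd if="$src" of="$dev" bs=4096 count=256 status=none
done

# "verify" phase: byte-compare each target; cmp exits non-zero on mismatch.
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$src" "$dev"
done
echo "verify ok"
rm -f "$src" "$dev0" "$dev1"
```

Against real NBD devices the trace adds `oflag=direct` on the write so the comparison exercises the block device rather than the page cache.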
00:05:19.403 08:42:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:19.404 08:42:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:19.404 08:42:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58306 /var/tmp/spdk-nbd.sock 00:05:19.404 08:42:57 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58306 ']' 00:05:19.404 08:42:57 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.404 08:42:57 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.404 08:42:57 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.404 08:42:57 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.404 08:42:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.404 08:42:57 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.404 08:42:57 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:19.404 08:42:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.663 Malloc0 00:05:19.663 08:42:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.922 Malloc1 00:05:19.922 08:42:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.922 08:42:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.922 08:42:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.922 08:42:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.922 08:42:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.922 08:42:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.922 08:42:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.922 08:42:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.922 08:42:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.922 08:42:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.922 08:42:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.922 08:42:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.922 08:42:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:19.922 08:42:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.922 08:42:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.922 08:42:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:20.182 /dev/nbd0 00:05:20.182 08:42:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:20.182 08:42:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:20.182 08:42:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:20.182 08:42:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:20.182 08:42:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:20.182 08:42:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:20.182 08:42:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:20.182 08:42:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:20.182 08:42:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:05:20.182 08:42:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:20.182 08:42:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.182 1+0 records in 00:05:20.182 1+0 records out 00:05:20.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213616 s, 19.2 MB/s 00:05:20.182 08:42:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.182 08:42:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:20.182 08:42:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.182 08:42:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:20.182 08:42:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:20.182 08:42:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.182 08:42:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.182 08:42:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.441 /dev/nbd1 00:05:20.441 08:42:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.441 08:42:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.441 08:42:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:20.441 08:42:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:20.441 08:42:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:20.441 08:42:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:20.441 08:42:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:20.441 08:42:58 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:20.441 08:42:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:20.441 08:42:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:20.441 08:42:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.441 1+0 records in 00:05:20.441 1+0 records out 00:05:20.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349791 s, 11.7 MB/s 00:05:20.441 08:42:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.441 08:42:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:20.441 08:42:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.441 08:42:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:20.441 08:42:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:20.441 08:42:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.441 08:42:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.441 08:42:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.441 08:42:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.441 08:42:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.700 08:42:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:20.700 { 00:05:20.701 "nbd_device": "/dev/nbd0", 00:05:20.701 "bdev_name": "Malloc0" 00:05:20.701 }, 00:05:20.701 { 00:05:20.701 "nbd_device": "/dev/nbd1", 00:05:20.701 "bdev_name": "Malloc1" 00:05:20.701 } 00:05:20.701 ]' 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.701 { 
00:05:20.701 "nbd_device": "/dev/nbd0", 00:05:20.701 "bdev_name": "Malloc0" 00:05:20.701 }, 00:05:20.701 { 00:05:20.701 "nbd_device": "/dev/nbd1", 00:05:20.701 "bdev_name": "Malloc1" 00:05:20.701 } 00:05:20.701 ]' 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.701 /dev/nbd1' 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.701 /dev/nbd1' 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.701 256+0 records in 00:05:20.701 256+0 records out 00:05:20.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145928 s, 71.9 MB/s 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.701 08:42:58 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.701 256+0 records in 00:05:20.701 256+0 records out 00:05:20.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021141 s, 49.6 MB/s 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.701 08:42:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.960 256+0 records in 00:05:20.960 256+0 records out 00:05:20.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255933 s, 41.0 MB/s 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.960 08:42:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:21.220 08:42:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:21.220 08:42:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:21.220 08:42:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:21.220 08:42:59 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.220 08:42:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.220 08:42:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:21.220 08:42:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.220 08:42:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.220 08:42:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.220 08:42:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.220 08:42:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.479 08:42:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.479 08:42:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.479 08:42:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.479 08:42:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.479 08:42:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.479 08:42:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.479 08:42:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:21.479 08:42:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.479 08:42:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.479 08:42:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.479 08:42:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.479 08:42:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.479 08:42:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:22.048 08:42:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:23.429 
[2024-09-28 08:43:01.170016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.429 [2024-09-28 08:43:01.397972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.429 [2024-09-28 08:43:01.397977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.688 [2024-09-28 08:43:01.616949] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:23.688 [2024-09-28 08:43:01.617059] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:25.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.067 08:43:02 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58306 /var/tmp/spdk-nbd.sock 00:05:25.067 08:43:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58306 ']' 00:05:25.067 08:43:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.067 08:43:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.067 08:43:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:25.067 08:43:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.067 08:43:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.067 08:43:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.067 08:43:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:25.067 08:43:03 event.app_repeat -- event/event.sh@39 -- # killprocess 58306 00:05:25.067 08:43:03 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58306 ']' 00:05:25.067 08:43:03 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58306 00:05:25.067 08:43:03 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:25.068 08:43:03 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.068 08:43:03 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58306 00:05:25.327 killing process with pid 58306 00:05:25.327 08:43:03 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.327 08:43:03 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.327 08:43:03 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58306' 00:05:25.327 08:43:03 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58306 00:05:25.327 08:43:03 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58306 00:05:26.707 spdk_app_start is called in Round 0. 00:05:26.707 Shutdown signal received, stop current app iteration 00:05:26.707 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:26.707 spdk_app_start is called in Round 1. 00:05:26.707 Shutdown signal received, stop current app iteration 00:05:26.707 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:26.707 spdk_app_start is called in Round 2. 
00:05:26.707 Shutdown signal received, stop current app iteration 00:05:26.707 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:26.707 spdk_app_start is called in Round 3. 00:05:26.707 Shutdown signal received, stop current app iteration 00:05:26.707 08:43:04 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:26.707 08:43:04 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:26.707 00:05:26.707 real 0m19.386s 00:05:26.707 user 0m39.720s 00:05:26.707 sys 0m3.221s 00:05:26.707 08:43:04 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.707 08:43:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.707 ************************************ 00:05:26.707 END TEST app_repeat 00:05:26.707 ************************************ 00:05:26.707 08:43:04 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:26.707 08:43:04 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:26.707 08:43:04 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.707 08:43:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.707 08:43:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.707 ************************************ 00:05:26.707 START TEST cpu_locks 00:05:26.707 ************************************ 00:05:26.707 08:43:04 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:26.707 * Looking for test storage... 
00:05:26.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:26.707 08:43:04 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:26.707 08:43:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:26.707 08:43:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:26.707 08:43:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.707 08:43:04 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:26.707 08:43:04 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.707 08:43:04 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:26.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.707 --rc genhtml_branch_coverage=1 00:05:26.707 --rc genhtml_function_coverage=1 00:05:26.707 --rc genhtml_legend=1 00:05:26.707 --rc geninfo_all_blocks=1 00:05:26.707 --rc geninfo_unexecuted_blocks=1 00:05:26.707 00:05:26.707 ' 00:05:26.707 08:43:04 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:26.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.707 --rc genhtml_branch_coverage=1 00:05:26.707 --rc genhtml_function_coverage=1 00:05:26.707 --rc genhtml_legend=1 00:05:26.707 --rc geninfo_all_blocks=1 00:05:26.707 --rc geninfo_unexecuted_blocks=1 
00:05:26.707 00:05:26.707 ' 00:05:26.707 08:43:04 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:26.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.707 --rc genhtml_branch_coverage=1 00:05:26.707 --rc genhtml_function_coverage=1 00:05:26.707 --rc genhtml_legend=1 00:05:26.707 --rc geninfo_all_blocks=1 00:05:26.707 --rc geninfo_unexecuted_blocks=1 00:05:26.707 00:05:26.707 ' 00:05:26.707 08:43:04 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:26.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.707 --rc genhtml_branch_coverage=1 00:05:26.707 --rc genhtml_function_coverage=1 00:05:26.707 --rc genhtml_legend=1 00:05:26.707 --rc geninfo_all_blocks=1 00:05:26.707 --rc geninfo_unexecuted_blocks=1 00:05:26.707 00:05:26.707 ' 00:05:26.707 08:43:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:26.707 08:43:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:26.707 08:43:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:26.707 08:43:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:26.707 08:43:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.707 08:43:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.707 08:43:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.707 ************************************ 00:05:26.707 START TEST default_locks 00:05:26.707 ************************************ 00:05:26.707 08:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:26.707 08:43:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58748 00:05:26.707 08:43:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58748 00:05:26.707 08:43:04 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.707 08:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58748 ']' 00:05:26.707 08:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.707 08:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.707 08:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.707 08:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.707 08:43:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.967 [2024-09-28 08:43:04.734717] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:26.967 [2024-09-28 08:43:04.735381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58748 ] 00:05:26.967 [2024-09-28 08:43:04.903904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.227 [2024-09-28 08:43:05.152041] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.185 08:43:06 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.185 08:43:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:28.185 08:43:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58748 00:05:28.185 08:43:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58748 00:05:28.185 08:43:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.761 08:43:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58748 00:05:28.761 08:43:06 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58748 ']' 00:05:28.761 08:43:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58748 00:05:28.761 08:43:06 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:28.761 08:43:06 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.762 08:43:06 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58748 00:05:28.762 08:43:06 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.762 08:43:06 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.762 killing process with pid 58748 00:05:28.762 08:43:06 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58748' 00:05:28.762 08:43:06 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58748 00:05:28.762 08:43:06 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58748 00:05:31.298 08:43:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58748 00:05:31.298 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:31.298 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58748 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58748 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58748 ']' 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.299 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58748) - No such process 00:05:31.299 ERROR: process (pid: 58748) is no longer running 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:31.299 00:05:31.299 real 0m4.527s 00:05:31.299 user 0m4.245s 00:05:31.299 sys 0m0.829s 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.299 08:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.299 ************************************ 00:05:31.299 END TEST default_locks 00:05:31.299 ************************************ 00:05:31.299 08:43:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:31.299 08:43:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:05:31.299 08:43:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.299 08:43:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.299 ************************************ 00:05:31.299 START TEST default_locks_via_rpc 00:05:31.299 ************************************ 00:05:31.299 08:43:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:31.299 08:43:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58828 00:05:31.299 08:43:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.299 08:43:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58828 00:05:31.299 08:43:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58828 ']' 00:05:31.299 08:43:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.299 08:43:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.299 08:43:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.299 08:43:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.299 08:43:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.559 [2024-09-28 08:43:09.330383] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:31.559 [2024-09-28 08:43:09.330527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58828 ] 00:05:31.559 [2024-09-28 08:43:09.495183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.818 [2024-09-28 08:43:09.745017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.756 08:43:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.756 08:43:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:32.756 08:43:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:32.756 08:43:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.756 08:43:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.014 08:43:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.014 08:43:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:33.014 08:43:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:33.014 08:43:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:33.014 08:43:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:33.014 08:43:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:33.014 08:43:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.014 08:43:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.014 08:43:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.014 08:43:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58828 00:05:33.014 08:43:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58828 00:05:33.014 08:43:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.273 08:43:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58828 00:05:33.273 08:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58828 ']' 00:05:33.273 08:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58828 00:05:33.273 08:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:33.273 08:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.273 08:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58828 00:05:33.273 08:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.273 08:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.273 killing process with pid 58828 00:05:33.273 08:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58828' 00:05:33.273 08:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58828 00:05:33.273 08:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58828 00:05:35.812 00:05:35.812 real 0m4.508s 00:05:35.812 user 0m4.266s 00:05:35.812 sys 0m0.779s 00:05:35.812 08:43:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.812 08:43:13 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.812 ************************************ 00:05:35.812 END TEST default_locks_via_rpc 00:05:35.812 ************************************ 00:05:35.812 08:43:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:35.812 08:43:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.812 08:43:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.812 08:43:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.812 ************************************ 00:05:35.812 START TEST non_locking_app_on_locked_coremask 00:05:35.812 ************************************ 00:05:35.812 08:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:35.812 08:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58908 00:05:35.812 08:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.812 08:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58908 /var/tmp/spdk.sock 00:05:35.812 08:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58908 ']' 00:05:35.812 08:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.812 08:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:35.812 08:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.812 08:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.812 08:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.073 [2024-09-28 08:43:13.905734] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:36.073 [2024-09-28 08:43:13.905866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58908 ] 00:05:36.333 [2024-09-28 08:43:14.075091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.333 [2024-09-28 08:43:14.316293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.714 08:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.715 08:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:37.715 08:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58924 00:05:37.715 08:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:37.715 08:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58924 /var/tmp/spdk2.sock 00:05:37.715 08:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58924 ']' 00:05:37.715 08:43:15 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.715 08:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.715 08:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.715 08:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.715 08:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.715 [2024-09-28 08:43:15.405051] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:37.715 [2024-09-28 08:43:15.405170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58924 ] 00:05:37.715 [2024-09-28 08:43:15.563401] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:37.715 [2024-09-28 08:43:15.563445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.284 [2024-09-28 08:43:16.074987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.192 08:43:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.192 08:43:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:40.192 08:43:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58908 00:05:40.192 08:43:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58908 00:05:40.192 08:43:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.130 08:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58908 00:05:41.130 08:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58908 ']' 00:05:41.130 08:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58908 00:05:41.130 08:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:41.130 08:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.130 08:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58908 00:05:41.130 08:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.130 08:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.130 killing process with pid 58908 00:05:41.130 08:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58908' 00:05:41.130 08:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58908 00:05:41.130 08:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58908 00:05:46.467 08:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58924 00:05:46.467 08:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58924 ']' 00:05:46.467 08:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58924 00:05:46.467 08:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:46.467 08:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.467 08:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58924 00:05:46.467 08:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.467 08:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.467 killing process with pid 58924 00:05:46.467 08:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58924' 00:05:46.467 08:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58924 00:05:46.467 08:43:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58924 00:05:49.009 00:05:49.009 real 0m13.050s 00:05:49.009 user 0m12.881s 00:05:49.009 sys 0m1.802s 00:05:49.009 08:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:05:49.009 08:43:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.009 ************************************ 00:05:49.009 END TEST non_locking_app_on_locked_coremask 00:05:49.009 ************************************ 00:05:49.009 08:43:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:49.009 08:43:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.009 08:43:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.009 08:43:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.009 ************************************ 00:05:49.009 START TEST locking_app_on_unlocked_coremask 00:05:49.009 ************************************ 00:05:49.009 08:43:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:49.009 08:43:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59088 00:05:49.009 08:43:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:49.009 08:43:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59088 /var/tmp/spdk.sock 00:05:49.009 08:43:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59088 ']' 00:05:49.009 08:43:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.009 08:43:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:49.009 08:43:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.009 08:43:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.009 08:43:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.269 [2024-09-28 08:43:27.023062] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:49.269 [2024-09-28 08:43:27.023188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59088 ] 00:05:49.269 [2024-09-28 08:43:27.192351] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:49.269 [2024-09-28 08:43:27.192414] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.529 [2024-09-28 08:43:27.446536] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.469 08:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.469 08:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:50.469 08:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59115 00:05:50.469 08:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59115 /var/tmp/spdk2.sock 00:05:50.469 08:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:50.469 08:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59115 ']' 
00:05:50.469 08:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.469 08:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.469 08:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.469 08:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.469 08:43:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.729 [2024-09-28 08:43:28.540924] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:50.729 [2024-09-28 08:43:28.541043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59115 ] 00:05:50.729 [2024-09-28 08:43:28.692249] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.299 [2024-09-28 08:43:29.189987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.208 08:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.208 08:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:53.208 08:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59115 00:05:53.208 08:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59115 00:05:53.208 08:43:31 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.468 08:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59088 00:05:53.468 08:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59088 ']' 00:05:53.468 08:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59088 00:05:53.728 08:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:53.728 08:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.728 08:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59088 00:05:53.728 08:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.728 08:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.728 killing process with pid 59088 00:05:53.728 08:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59088' 00:05:53.728 08:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59088 00:05:53.728 08:43:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59088 00:05:59.005 08:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59115 00:05:59.005 08:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59115 ']' 00:05:59.005 08:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59115 00:05:59.005 08:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@955 -- # uname 00:05:59.005 08:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.005 08:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59115 00:05:59.005 08:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.005 08:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.005 killing process with pid 59115 00:05:59.005 08:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59115' 00:05:59.005 08:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59115 00:05:59.005 08:43:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59115 00:06:01.543 00:06:01.543 real 0m12.568s 00:06:01.543 user 0m12.405s 00:06:01.543 sys 0m1.549s 00:06:01.543 08:43:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.543 08:43:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.543 ************************************ 00:06:01.543 END TEST locking_app_on_unlocked_coremask 00:06:01.543 ************************************ 00:06:01.804 08:43:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:01.804 08:43:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.804 08:43:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.804 08:43:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.804 ************************************ 00:06:01.804 START TEST 
locking_app_on_locked_coremask 00:06:01.804 ************************************ 00:06:01.804 08:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:01.804 08:43:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59271 00:06:01.804 08:43:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59271 /var/tmp/spdk.sock 00:06:01.804 08:43:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.804 08:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59271 ']' 00:06:01.804 08:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.804 08:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.804 08:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.804 08:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.804 08:43:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.804 [2024-09-28 08:43:39.663451] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:01.804 [2024-09-28 08:43:39.663586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59271 ] 00:06:02.064 [2024-09-28 08:43:39.832897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.323 [2024-09-28 08:43:40.075574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.340 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.340 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:03.340 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:03.340 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59291 00:06:03.340 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59291 /var/tmp/spdk2.sock 00:06:03.340 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:03.340 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59291 /var/tmp/spdk2.sock 00:06:03.340 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:03.340 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.340 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:03.340 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:06:03.340 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59291 /var/tmp/spdk2.sock 00:06:03.340 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59291 ']' 00:06:03.340 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.341 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.341 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.341 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.341 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.341 [2024-09-28 08:43:41.141006] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:03.341 [2024-09-28 08:43:41.141139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59291 ] 00:06:03.341 [2024-09-28 08:43:41.299903] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59271 has claimed it. 00:06:03.341 [2024-09-28 08:43:41.299979] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:03.909 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59291) - No such process 00:06:03.909 ERROR: process (pid: 59291) is no longer running 00:06:03.909 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.909 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:03.909 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:03.909 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.909 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.909 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.909 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59271 00:06:03.909 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59271 00:06:03.909 08:43:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.169 08:43:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59271 00:06:04.169 08:43:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59271 ']' 00:06:04.169 08:43:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59271 00:06:04.169 08:43:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:04.169 08:43:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.169 08:43:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59271 00:06:04.429 
08:43:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.429 08:43:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.429 killing process with pid 59271 00:06:04.429 08:43:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59271' 00:06:04.429 08:43:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59271 00:06:04.429 08:43:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59271 00:06:06.965 00:06:06.965 real 0m5.300s 00:06:06.965 user 0m5.260s 00:06:06.965 sys 0m0.989s 00:06:06.965 08:43:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.965 08:43:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.965 ************************************ 00:06:06.965 END TEST locking_app_on_locked_coremask 00:06:06.965 ************************************ 00:06:06.965 08:43:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:06.965 08:43:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.965 08:43:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.965 08:43:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.965 ************************************ 00:06:06.965 START TEST locking_overlapped_coremask 00:06:06.965 ************************************ 00:06:06.965 08:43:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:06.965 08:43:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59364 00:06:06.965 08:43:44 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:06.965 08:43:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59364 /var/tmp/spdk.sock 00:06:06.965 08:43:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59364 ']' 00:06:06.965 08:43:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.965 08:43:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.965 08:43:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.965 08:43:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.965 08:43:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.224 [2024-09-28 08:43:45.033200] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:07.224 [2024-09-28 08:43:45.033365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59364 ] 00:06:07.224 [2024-09-28 08:43:45.202998] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.483 [2024-09-28 08:43:45.453858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.483 [2024-09-28 08:43:45.453903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.483 [2024-09-28 08:43:45.453954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59387 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59387 /var/tmp/spdk2.sock 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59387 /var/tmp/spdk2.sock 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59387 /var/tmp/spdk2.sock 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59387 ']' 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.861 08:43:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.861 [2024-09-28 08:43:46.560130] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:08.861 [2024-09-28 08:43:46.560258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59387 ] 00:06:08.861 [2024-09-28 08:43:46.722053] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59364 has claimed it. 00:06:08.861 [2024-09-28 08:43:46.722118] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:09.430 ERROR: process (pid: 59387) is no longer running 00:06:09.430 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59387) - No such process 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59364 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59364 ']' 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59364 00:06:09.430 08:43:47 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59364 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.430 killing process with pid 59364 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59364' 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59364 00:06:09.430 08:43:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59364 00:06:12.726 00:06:12.726 real 0m5.049s 00:06:12.726 user 0m13.069s 00:06:12.726 sys 0m0.805s 00:06:12.726 08:43:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.726 ************************************ 00:06:12.726 END TEST locking_overlapped_coremask 00:06:12.726 ************************************ 00:06:12.726 08:43:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.726 08:43:50 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:12.726 08:43:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.726 08:43:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.726 08:43:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.726 ************************************ 00:06:12.726 START TEST 
locking_overlapped_coremask_via_rpc 00:06:12.726 ************************************ 00:06:12.726 08:43:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:12.726 08:43:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59457 00:06:12.726 08:43:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:12.726 08:43:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59457 /var/tmp/spdk.sock 00:06:12.726 08:43:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59457 ']' 00:06:12.726 08:43:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.726 08:43:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.726 08:43:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.726 08:43:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.726 08:43:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.726 [2024-09-28 08:43:50.146827] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:12.726 [2024-09-28 08:43:50.146974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59457 ] 00:06:12.726 [2024-09-28 08:43:50.314581] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:12.726 [2024-09-28 08:43:50.314657] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.726 [2024-09-28 08:43:50.559736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.726 [2024-09-28 08:43:50.559818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.726 [2024-09-28 08:43:50.559858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.666 08:43:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.666 08:43:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:13.666 08:43:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59475 00:06:13.666 08:43:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59475 /var/tmp/spdk2.sock 00:06:13.666 08:43:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:13.666 08:43:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59475 ']' 00:06:13.666 08:43:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.666 08:43:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.666 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.666 08:43:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.666 08:43:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.666 08:43:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.926 [2024-09-28 08:43:51.674623] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:13.926 [2024-09-28 08:43:51.674760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59475 ] 00:06:13.926 [2024-09-28 08:43:51.832582] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:13.926 [2024-09-28 08:43:51.832629] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.495 [2024-09-28 08:43:52.360383] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.495 [2024-09-28 08:43:52.363862] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.495 [2024-09-28 08:43:52.363910] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:16.403 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.403 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:16.403 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:16.403 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.403 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.403 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.403 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.403 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:16.403 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.403 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:16.403 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.403 08:43:54 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:16.403 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.403 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.403 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.403 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.403 [2024-09-28 08:43:54.390889] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59457 has claimed it. 00:06:16.663 request: 00:06:16.663 { 00:06:16.663 "method": "framework_enable_cpumask_locks", 00:06:16.663 "req_id": 1 00:06:16.663 } 00:06:16.663 Got JSON-RPC error response 00:06:16.663 response: 00:06:16.663 { 00:06:16.663 "code": -32603, 00:06:16.663 "message": "Failed to claim CPU core: 2" 00:06:16.663 } 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59457 /var/tmp/spdk.sock 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 59457 ']' 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59475 /var/tmp/spdk2.sock 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59475 ']' 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.663 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.923 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.923 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:16.923 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:16.923 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:16.923 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:16.923 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:16.923 00:06:16.923 real 0m4.800s 00:06:16.923 user 0m1.264s 00:06:16.923 sys 0m0.224s 00:06:16.923 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.923 08:43:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.923 ************************************ 00:06:16.923 END TEST locking_overlapped_coremask_via_rpc 00:06:16.923 ************************************ 00:06:16.923 08:43:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:16.923 08:43:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59457 ]] 00:06:16.923 08:43:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59457 00:06:16.923 08:43:54 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59457 ']' 00:06:16.923 08:43:54 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59457 00:06:16.923 08:43:54 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:16.923 08:43:54 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.923 08:43:54 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59457 00:06:17.183 killing process with pid 59457 00:06:17.183 08:43:54 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.183 08:43:54 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.183 08:43:54 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59457' 00:06:17.183 08:43:54 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59457 00:06:17.183 08:43:54 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59457 00:06:20.487 08:43:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59475 ]] 00:06:20.487 08:43:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59475 00:06:20.487 08:43:57 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59475 ']' 00:06:20.487 08:43:57 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59475 00:06:20.487 08:43:57 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:20.487 08:43:57 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.487 08:43:57 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59475 00:06:20.487 08:43:57 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:20.487 08:43:57 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:20.487 killing process with pid 59475 00:06:20.487 08:43:57 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 59475' 00:06:20.487 08:43:57 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59475 00:06:20.487 08:43:57 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59475 00:06:23.027 08:44:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.027 08:44:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:23.027 08:44:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59457 ]] 00:06:23.027 08:44:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59457 00:06:23.027 08:44:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59457 ']' 00:06:23.027 08:44:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59457 00:06:23.027 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59457) - No such process 00:06:23.027 Process with pid 59457 is not found 00:06:23.027 08:44:00 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59457 is not found' 00:06:23.027 08:44:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59475 ]] 00:06:23.027 08:44:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59475 00:06:23.027 08:44:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59475 ']' 00:06:23.027 08:44:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59475 00:06:23.027 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59475) - No such process 00:06:23.027 Process with pid 59475 is not found 00:06:23.027 08:44:00 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59475 is not found' 00:06:23.027 08:44:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.027 00:06:23.027 real 0m56.113s 00:06:23.027 user 1m32.136s 00:06:23.027 sys 0m8.587s 00:06:23.027 08:44:00 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.027 08:44:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.027 
************************************ 00:06:23.027 END TEST cpu_locks 00:06:23.027 ************************************ 00:06:23.027 00:06:23.027 real 1m28.457s 00:06:23.027 user 2m32.658s 00:06:23.027 sys 0m13.309s 00:06:23.027 08:44:00 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.027 08:44:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.027 ************************************ 00:06:23.027 END TEST event 00:06:23.027 ************************************ 00:06:23.027 08:44:00 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:23.027 08:44:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.027 08:44:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.027 08:44:00 -- common/autotest_common.sh@10 -- # set +x 00:06:23.027 ************************************ 00:06:23.027 START TEST thread 00:06:23.027 ************************************ 00:06:23.027 08:44:00 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:23.027 * Looking for test storage... 
00:06:23.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:23.027 08:44:00 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:23.027 08:44:00 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:23.027 08:44:00 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:23.027 08:44:00 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:23.027 08:44:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.027 08:44:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.027 08:44:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.027 08:44:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.027 08:44:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.027 08:44:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.027 08:44:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.027 08:44:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.028 08:44:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.028 08:44:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.028 08:44:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.028 08:44:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:23.028 08:44:00 thread -- scripts/common.sh@345 -- # : 1 00:06:23.028 08:44:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.028 08:44:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.028 08:44:00 thread -- scripts/common.sh@365 -- # decimal 1 00:06:23.028 08:44:00 thread -- scripts/common.sh@353 -- # local d=1 00:06:23.028 08:44:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.028 08:44:00 thread -- scripts/common.sh@355 -- # echo 1 00:06:23.028 08:44:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.028 08:44:00 thread -- scripts/common.sh@366 -- # decimal 2 00:06:23.028 08:44:00 thread -- scripts/common.sh@353 -- # local d=2 00:06:23.028 08:44:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.028 08:44:00 thread -- scripts/common.sh@355 -- # echo 2 00:06:23.028 08:44:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.028 08:44:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.028 08:44:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.028 08:44:00 thread -- scripts/common.sh@368 -- # return 0 00:06:23.028 08:44:00 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.028 08:44:00 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:23.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.028 --rc genhtml_branch_coverage=1 00:06:23.028 --rc genhtml_function_coverage=1 00:06:23.028 --rc genhtml_legend=1 00:06:23.028 --rc geninfo_all_blocks=1 00:06:23.028 --rc geninfo_unexecuted_blocks=1 00:06:23.028 00:06:23.028 ' 00:06:23.028 08:44:00 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:23.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.028 --rc genhtml_branch_coverage=1 00:06:23.028 --rc genhtml_function_coverage=1 00:06:23.028 --rc genhtml_legend=1 00:06:23.028 --rc geninfo_all_blocks=1 00:06:23.028 --rc geninfo_unexecuted_blocks=1 00:06:23.028 00:06:23.028 ' 00:06:23.028 08:44:00 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:23.028 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.028 --rc genhtml_branch_coverage=1 00:06:23.028 --rc genhtml_function_coverage=1 00:06:23.028 --rc genhtml_legend=1 00:06:23.028 --rc geninfo_all_blocks=1 00:06:23.028 --rc geninfo_unexecuted_blocks=1 00:06:23.028 00:06:23.028 ' 00:06:23.028 08:44:00 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:23.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.028 --rc genhtml_branch_coverage=1 00:06:23.028 --rc genhtml_function_coverage=1 00:06:23.028 --rc genhtml_legend=1 00:06:23.028 --rc geninfo_all_blocks=1 00:06:23.028 --rc geninfo_unexecuted_blocks=1 00:06:23.028 00:06:23.028 ' 00:06:23.028 08:44:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.028 08:44:00 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:23.028 08:44:00 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.028 08:44:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.028 ************************************ 00:06:23.028 START TEST thread_poller_perf 00:06:23.028 ************************************ 00:06:23.028 08:44:00 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.028 [2024-09-28 08:44:00.909229] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:23.028 [2024-09-28 08:44:00.909332] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59681 ] 00:06:23.287 [2024-09-28 08:44:01.077546] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.546 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:23.546 [2024-09-28 08:44:01.323047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.924 ====================================== 00:06:24.924 busy:2300226272 (cyc) 00:06:24.924 total_run_count: 430000 00:06:24.924 tsc_hz: 2290000000 (cyc) 00:06:24.924 ====================================== 00:06:24.924 poller_cost: 5349 (cyc), 2335 (nsec) 00:06:24.924 00:06:24.924 real 0m1.877s 00:06:24.924 user 0m1.622s 00:06:24.924 sys 0m0.147s 00:06:24.924 08:44:02 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.924 08:44:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.924 ************************************ 00:06:24.924 END TEST thread_poller_perf 00:06:24.924 ************************************ 00:06:24.924 08:44:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.924 08:44:02 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:24.924 08:44:02 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.924 08:44:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.924 ************************************ 00:06:24.924 START TEST thread_poller_perf 00:06:24.924 ************************************ 00:06:24.924 08:44:02 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 
1000 -l 0 -t 1 00:06:24.924 [2024-09-28 08:44:02.857917] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:24.924 [2024-09-28 08:44:02.858019] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59723 ] 00:06:25.183 [2024-09-28 08:44:03.020978] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.442 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:25.442 [2024-09-28 08:44:03.265766] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.819 ====================================== 00:06:26.819 busy:2293431062 (cyc) 00:06:26.819 total_run_count: 5566000 00:06:26.819 tsc_hz: 2290000000 (cyc) 00:06:26.819 ====================================== 00:06:26.819 poller_cost: 412 (cyc), 179 (nsec) 00:06:26.819 00:06:26.819 real 0m1.866s 00:06:26.819 user 0m1.633s 00:06:26.820 sys 0m0.125s 00:06:26.820 08:44:04 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.820 08:44:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.820 ************************************ 00:06:26.820 END TEST thread_poller_perf 00:06:26.820 ************************************ 00:06:26.820 08:44:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:26.820 00:06:26.820 real 0m4.104s 00:06:26.820 user 0m3.420s 00:06:26.820 sys 0m0.482s 00:06:26.820 08:44:04 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.820 08:44:04 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.820 ************************************ 00:06:26.820 END TEST thread 00:06:26.820 ************************************ 00:06:26.820 08:44:04 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:26.820 08:44:04 -- spdk/autotest.sh@176 -- # run_test 
app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.820 08:44:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.820 08:44:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.820 08:44:04 -- common/autotest_common.sh@10 -- # set +x 00:06:26.820 ************************************ 00:06:26.820 START TEST app_cmdline 00:06:26.820 ************************************ 00:06:26.820 08:44:04 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:27.079 * Looking for test storage... 00:06:27.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:27.079 08:44:04 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:27.079 08:44:04 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:27.079 08:44:04 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:27.079 08:44:04 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:27.079 08:44:04 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.079 08:44:04 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.079 08:44:04 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.079 08:44:04 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.079 08:44:04 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.079 08:44:04 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.079 08:44:04 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.079 08:44:04 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.079 08:44:04 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.079 08:44:04 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.079 08:44:04 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.079 08:44:05 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:27.079 08:44:05 app_cmdline -- 
scripts/common.sh@345 -- # : 1 00:06:27.079 08:44:05 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.079 08:44:05 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.079 08:44:05 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:27.079 08:44:05 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:27.079 08:44:05 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.079 08:44:05 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:27.079 08:44:05 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.079 08:44:05 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:27.079 08:44:05 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:27.080 08:44:05 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.080 08:44:05 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:27.080 08:44:05 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.080 08:44:05 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.080 08:44:05 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.080 08:44:05 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:27.080 08:44:05 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.080 08:44:05 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:27.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.080 --rc genhtml_branch_coverage=1 00:06:27.080 --rc genhtml_function_coverage=1 00:06:27.080 --rc genhtml_legend=1 00:06:27.080 --rc geninfo_all_blocks=1 00:06:27.080 --rc geninfo_unexecuted_blocks=1 00:06:27.080 00:06:27.080 ' 00:06:27.080 08:44:05 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:27.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.080 --rc genhtml_branch_coverage=1 00:06:27.080 --rc 
genhtml_function_coverage=1 00:06:27.080 --rc genhtml_legend=1 00:06:27.080 --rc geninfo_all_blocks=1 00:06:27.080 --rc geninfo_unexecuted_blocks=1 00:06:27.080 00:06:27.080 ' 00:06:27.080 08:44:05 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:27.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.080 --rc genhtml_branch_coverage=1 00:06:27.080 --rc genhtml_function_coverage=1 00:06:27.080 --rc genhtml_legend=1 00:06:27.080 --rc geninfo_all_blocks=1 00:06:27.080 --rc geninfo_unexecuted_blocks=1 00:06:27.080 00:06:27.080 ' 00:06:27.080 08:44:05 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:27.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.080 --rc genhtml_branch_coverage=1 00:06:27.080 --rc genhtml_function_coverage=1 00:06:27.080 --rc genhtml_legend=1 00:06:27.080 --rc geninfo_all_blocks=1 00:06:27.080 --rc geninfo_unexecuted_blocks=1 00:06:27.080 00:06:27.080 ' 00:06:27.080 08:44:05 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:27.080 08:44:05 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59812 00:06:27.080 08:44:05 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:27.080 08:44:05 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59812 00:06:27.080 08:44:05 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59812 ']' 00:06:27.080 08:44:05 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.080 08:44:05 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.080 08:44:05 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
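[editor's note] The `cmp_versions`/`lt` trace from scripts/common.sh above (used here to gate lcov options on `lcov --version`) can be sketched as a standalone function. This is a hypothetical minimal reimplementation for illustration, not the actual helper: split each version string on `.`, `-`, or `:`, then compare component by component, treating missing components as 0.

```shell
#!/usr/bin/env bash
# Minimal sketch (hypothetical reimplementation, not the real
# scripts/common.sh) of the version comparison traced above.
cmp_versions() {
    local v1=$1 op=$2 v2=$3
    local -a ver1 ver2
    local IFS=.-:                       # split on dot, dash, or colon
    read -ra ver1 <<< "$v1"
    read -ra ver2 <<< "$v2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing component -> 0
        (( d1 > d2 )) && { [[ $op == ">" ]]; return; }
        (( d1 < d2 )) && { [[ $op == "<" ]]; return; }
    done
    [[ $op == "==" ]]                   # all components equal
}

cmp_versions 1.15 "<" 2 && echo "lcov 1.15 is older than 2"
```

This matches what the trace shows: `lt 1.15 2` succeeds, so the lcov coverage options get enabled for this run.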
00:06:27.080 08:44:05 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.080 08:44:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.339 [2024-09-28 08:44:05.129388] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:27.340 [2024-09-28 08:44:05.129633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59812 ] 00:06:27.340 [2024-09-28 08:44:05.296961] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.599 [2024-09-28 08:44:05.542539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.537 08:44:06 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.537 08:44:06 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:28.537 08:44:06 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:28.797 { 00:06:28.797 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:06:28.797 "fields": { 00:06:28.797 "major": 25, 00:06:28.797 "minor": 1, 00:06:28.797 "patch": 0, 00:06:28.797 "suffix": "-pre", 00:06:28.797 "commit": "09cc66129" 00:06:28.797 } 00:06:28.797 } 00:06:28.797 08:44:06 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:28.797 08:44:06 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:28.797 08:44:06 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:28.797 08:44:06 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:28.797 08:44:06 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:28.797 08:44:06 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:28.797 08:44:06 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:28.797 
08:44:06 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.797 08:44:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.797 08:44:06 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.797 08:44:06 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:28.797 08:44:06 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:28.797 08:44:06 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.797 08:44:06 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:28.797 08:44:06 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.797 08:44:06 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.797 08:44:06 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.797 08:44:06 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.797 08:44:06 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.797 08:44:06 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.797 08:44:06 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.797 08:44:06 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.797 08:44:06 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:28.797 08:44:06 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.057 request: 00:06:29.057 { 00:06:29.057 "method": "env_dpdk_get_mem_stats", 
00:06:29.057 "req_id": 1 00:06:29.057 } 00:06:29.057 Got JSON-RPC error response 00:06:29.057 response: 00:06:29.057 { 00:06:29.057 "code": -32601, 00:06:29.057 "message": "Method not found" 00:06:29.057 } 00:06:29.057 08:44:06 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:29.057 08:44:06 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.057 08:44:06 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:29.057 08:44:06 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.057 08:44:06 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59812 00:06:29.057 08:44:06 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59812 ']' 00:06:29.057 08:44:06 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59812 00:06:29.057 08:44:06 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:29.057 08:44:06 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.057 08:44:06 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59812 00:06:29.057 killing process with pid 59812 00:06:29.057 08:44:07 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.057 08:44:07 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.057 08:44:07 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59812' 00:06:29.057 08:44:07 app_cmdline -- common/autotest_common.sh@969 -- # kill 59812 00:06:29.057 08:44:07 app_cmdline -- common/autotest_common.sh@974 -- # wait 59812 00:06:32.353 00:06:32.353 real 0m4.858s 00:06:32.353 user 0m4.841s 00:06:32.353 sys 0m0.794s 00:06:32.353 08:44:09 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.353 08:44:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:32.353 ************************************ 00:06:32.353 END TEST app_cmdline 00:06:32.354 ************************************ 00:06:32.354 08:44:09 -- 
spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:32.354 08:44:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.354 08:44:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.354 08:44:09 -- common/autotest_common.sh@10 -- # set +x 00:06:32.354 ************************************ 00:06:32.354 START TEST version 00:06:32.354 ************************************ 00:06:32.354 08:44:09 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:32.354 * Looking for test storage... 00:06:32.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:32.354 08:44:09 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:32.354 08:44:09 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:32.354 08:44:09 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:32.354 08:44:09 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:32.354 08:44:09 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.354 08:44:09 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.354 08:44:09 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.354 08:44:09 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.354 08:44:09 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.354 08:44:09 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.354 08:44:09 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.354 08:44:09 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.354 08:44:09 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.354 08:44:09 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.354 08:44:09 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.354 08:44:09 version -- scripts/common.sh@344 -- # case "$op" in 00:06:32.354 08:44:09 version -- scripts/common.sh@345 -- # : 1 00:06:32.354 08:44:09 version -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.354 08:44:09 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.354 08:44:09 version -- scripts/common.sh@365 -- # decimal 1 00:06:32.354 08:44:09 version -- scripts/common.sh@353 -- # local d=1 00:06:32.354 08:44:09 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.354 08:44:09 version -- scripts/common.sh@355 -- # echo 1 00:06:32.354 08:44:09 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.354 08:44:09 version -- scripts/common.sh@366 -- # decimal 2 00:06:32.354 08:44:09 version -- scripts/common.sh@353 -- # local d=2 00:06:32.354 08:44:09 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.354 08:44:09 version -- scripts/common.sh@355 -- # echo 2 00:06:32.354 08:44:09 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.354 08:44:09 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.354 08:44:09 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.354 08:44:09 version -- scripts/common.sh@368 -- # return 0 00:06:32.354 08:44:09 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.354 08:44:09 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:32.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.354 --rc genhtml_branch_coverage=1 00:06:32.354 --rc genhtml_function_coverage=1 00:06:32.354 --rc genhtml_legend=1 00:06:32.354 --rc geninfo_all_blocks=1 00:06:32.354 --rc geninfo_unexecuted_blocks=1 00:06:32.354 00:06:32.354 ' 00:06:32.354 08:44:09 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:32.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.354 --rc genhtml_branch_coverage=1 00:06:32.354 --rc genhtml_function_coverage=1 00:06:32.354 --rc genhtml_legend=1 00:06:32.354 --rc geninfo_all_blocks=1 00:06:32.354 --rc geninfo_unexecuted_blocks=1 
00:06:32.354 00:06:32.354 ' 00:06:32.354 08:44:09 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:32.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.354 --rc genhtml_branch_coverage=1 00:06:32.354 --rc genhtml_function_coverage=1 00:06:32.354 --rc genhtml_legend=1 00:06:32.354 --rc geninfo_all_blocks=1 00:06:32.354 --rc geninfo_unexecuted_blocks=1 00:06:32.354 00:06:32.354 ' 00:06:32.354 08:44:09 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:32.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.354 --rc genhtml_branch_coverage=1 00:06:32.354 --rc genhtml_function_coverage=1 00:06:32.354 --rc genhtml_legend=1 00:06:32.354 --rc geninfo_all_blocks=1 00:06:32.354 --rc geninfo_unexecuted_blocks=1 00:06:32.354 00:06:32.354 ' 00:06:32.354 08:44:09 version -- app/version.sh@17 -- # get_header_version major 00:06:32.354 08:44:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:32.354 08:44:09 version -- app/version.sh@14 -- # cut -f2 00:06:32.354 08:44:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.354 08:44:09 version -- app/version.sh@17 -- # major=25 00:06:32.354 08:44:09 version -- app/version.sh@18 -- # get_header_version minor 00:06:32.354 08:44:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:32.354 08:44:09 version -- app/version.sh@14 -- # cut -f2 00:06:32.354 08:44:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.354 08:44:09 version -- app/version.sh@18 -- # minor=1 00:06:32.354 08:44:09 version -- app/version.sh@19 -- # get_header_version patch 00:06:32.354 08:44:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:32.354 08:44:09 version -- app/version.sh@14 -- # cut -f2 00:06:32.354 
08:44:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.354 08:44:09 version -- app/version.sh@19 -- # patch=0 00:06:32.354 08:44:09 version -- app/version.sh@20 -- # get_header_version suffix 00:06:32.354 08:44:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:32.354 08:44:09 version -- app/version.sh@14 -- # cut -f2 00:06:32.354 08:44:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.354 08:44:09 version -- app/version.sh@20 -- # suffix=-pre 00:06:32.354 08:44:09 version -- app/version.sh@22 -- # version=25.1 00:06:32.354 08:44:09 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:32.354 08:44:09 version -- app/version.sh@28 -- # version=25.1rc0 00:06:32.354 08:44:09 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:32.354 08:44:09 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:32.354 08:44:10 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:32.354 08:44:10 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:32.354 00:06:32.354 real 0m0.311s 00:06:32.354 user 0m0.156s 00:06:32.354 sys 0m0.209s 00:06:32.354 ************************************ 00:06:32.354 END TEST version 00:06:32.354 ************************************ 00:06:32.354 08:44:10 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.354 08:44:10 version -- common/autotest_common.sh@10 -- # set +x 00:06:32.354 08:44:10 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:32.354 08:44:10 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:32.354 08:44:10 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:32.354 08:44:10 -- common/autotest_common.sh@1101 
-- # '[' 2 -le 1 ']' 00:06:32.354 08:44:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.354 08:44:10 -- common/autotest_common.sh@10 -- # set +x 00:06:32.354 ************************************ 00:06:32.354 START TEST bdev_raid 00:06:32.354 ************************************ 00:06:32.354 08:44:10 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:32.354 * Looking for test storage... 00:06:32.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:32.354 08:44:10 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:32.354 08:44:10 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:06:32.354 08:44:10 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:32.354 08:44:10 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.354 08:44:10 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:32.355 08:44:10 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:32.355 08:44:10 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.355 08:44:10 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:32.355 08:44:10 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.355 08:44:10 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.355 08:44:10 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.355 08:44:10 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:32.355 08:44:10 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.355 08:44:10 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:32.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.355 --rc genhtml_branch_coverage=1 00:06:32.355 --rc genhtml_function_coverage=1 00:06:32.355 --rc genhtml_legend=1 00:06:32.355 --rc geninfo_all_blocks=1 00:06:32.355 --rc geninfo_unexecuted_blocks=1 00:06:32.355 00:06:32.355 ' 00:06:32.355 08:44:10 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:32.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.355 --rc genhtml_branch_coverage=1 00:06:32.355 --rc genhtml_function_coverage=1 00:06:32.355 --rc genhtml_legend=1 00:06:32.355 --rc geninfo_all_blocks=1 00:06:32.355 --rc geninfo_unexecuted_blocks=1 00:06:32.355 00:06:32.355 ' 00:06:32.355 08:44:10 bdev_raid -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:32.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.355 --rc genhtml_branch_coverage=1 00:06:32.355 --rc genhtml_function_coverage=1 00:06:32.355 --rc genhtml_legend=1 00:06:32.355 --rc geninfo_all_blocks=1 00:06:32.355 --rc geninfo_unexecuted_blocks=1 00:06:32.355 00:06:32.355 ' 00:06:32.355 08:44:10 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:32.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.355 --rc genhtml_branch_coverage=1 00:06:32.355 --rc genhtml_function_coverage=1 00:06:32.355 --rc genhtml_legend=1 00:06:32.355 --rc geninfo_all_blocks=1 00:06:32.355 --rc geninfo_unexecuted_blocks=1 00:06:32.355 00:06:32.355 ' 00:06:32.355 08:44:10 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:32.355 08:44:10 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:32.355 08:44:10 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:32.355 08:44:10 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:32.355 08:44:10 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:32.355 08:44:10 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:32.355 08:44:10 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:32.355 08:44:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.355 08:44:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.355 08:44:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:32.615 ************************************ 00:06:32.615 START TEST raid1_resize_data_offset_test 00:06:32.615 ************************************ 00:06:32.615 08:44:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:06:32.615 08:44:10 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@917 -- # raid_pid=60010 00:06:32.615 Process raid pid: 60010 00:06:32.615 08:44:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:32.615 08:44:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60010' 00:06:32.615 08:44:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60010 00:06:32.615 08:44:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 60010 ']' 00:06:32.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.615 08:44:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.615 08:44:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.615 08:44:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.615 08:44:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.615 08:44:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.615 [2024-09-28 08:44:10.437245] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
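[editor's note] The `waitforlisten 60010` step traced above boils down to polling until the target pid is alive and its RPC socket appears, within a retry budget. A hypothetical standalone sketch follows — the real helper in common/autotest_common.sh additionally confirms readiness over the socket itself, and the socket path and retry count shown here are illustrative:

```shell
# Hypothetical sketch of the waitforlisten idiom traced above: poll until
# the target pid is still alive AND its RPC socket path shows up, within a
# retry budget. (Standalone: only watches the filesystem, unlike the real
# helper, which also probes the socket via rpc.py.)
waitforlisten() {
    local pid=$1 sock=$2 max_retries=${3:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1  # target process died
        [[ -e $sock ]] && return 0              # socket path appeared
        sleep 0.01
    done
    return 1  # timed out waiting for the listener
}
```

Checking `kill -0` on every iteration is what lets the harness fail fast when the target crashes during startup instead of burning the full retry budget.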
00:06:32.615 [2024-09-28 08:44:10.437384] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.615 [2024-09-28 08:44:10.607793] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.876 [2024-09-28 08:44:10.842127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.136 [2024-09-28 08:44:11.075323] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.136 [2024-09-28 08:44:11.075358] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.396 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.396 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:06:33.396 08:44:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:33.396 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.396 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.396 malloc0 00:06:33.396 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.396 08:44:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:33.396 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.396 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.656 malloc1 00:06:33.656 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.656 08:44:11 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:33.656 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.656 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.656 null0 00:06:33.656 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.656 08:44:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:33.656 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.656 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.656 [2024-09-28 08:44:11.471274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:33.656 [2024-09-28 08:44:11.473369] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:33.656 [2024-09-28 08:44:11.473457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:33.656 [2024-09-28 08:44:11.473668] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:33.656 [2024-09-28 08:44:11.473714] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:33.656 [2024-09-28 08:44:11.474019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:33.656 [2024-09-28 08:44:11.474228] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:33.657 [2024-09-28 08:44:11.474274] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:33.657 [2024-09-28 08:44:11.474467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
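[editor's note] The `blockcnt 129024, blocklen 512` reported above is consistent with the 2048-block `data_offset` the test checks next: each base bdev is created as 64 MiB with 512-byte blocks, and the superblock/metadata offset is carved out of every member before the raid computes its usable size. A quick sanity check of those numbers (values taken from this run):

```shell
# Sanity-check the raid geometry logged above (values from this run):
# 64 MiB base bdevs, 512-byte blocks, 2048-block data_offset.
base_mib=64 blocklen=512 data_offset=2048
total_blocks=$(( base_mib * 1024 * 1024 / blocklen ))  # blocks per base bdev
usable_blocks=$(( total_blocks - data_offset ))        # what the raid exposes
echo "blockcnt: $usable_blocks"   # prints "blockcnt: 129024", matching the log
```

The later `(( 2070 == 2070 ))` check then verifies that `data_offset` grows after `malloc2` (created with a larger `-o` metadata option) is added and the rebuild completes.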
00:06:33.657 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.657 08:44:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.657 08:44:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:33.657 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.657 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.657 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.657 08:44:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:33.657 08:44:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:33.657 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.657 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.657 [2024-09-28 08:44:11.531121] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:33.657 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.657 08:44:11 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:33.657 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.657 08:44:11 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.226 malloc2 00:06:34.226 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.226 08:44:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:34.226 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.226 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.226 [2024-09-28 08:44:12.158131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:34.226 [2024-09-28 08:44:12.175747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:34.226 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.226 [2024-09-28 08:44:12.177841] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:34.226 08:44:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:34.227 08:44:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:34.227 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.227 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.227 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.545 08:44:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:34.545 08:44:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60010 00:06:34.545 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 60010 ']' 00:06:34.545 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 60010 00:06:34.545 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:06:34.545 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:06:34.545 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60010 00:06:34.545 killing process with pid 60010 00:06:34.545 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.545 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.545 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60010' 00:06:34.545 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 60010 00:06:34.545 08:44:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 60010 00:06:34.545 [2024-09-28 08:44:12.271432] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:34.545 [2024-09-28 08:44:12.273342] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:34.545 [2024-09-28 08:44:12.273429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:34.545 [2024-09-28 08:44:12.273448] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:34.545 [2024-09-28 08:44:12.302873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:34.545 [2024-09-28 08:44:12.303320] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:34.545 [2024-09-28 08:44:12.303393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:36.645 [2024-09-28 08:44:14.185990] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:37.585 08:44:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:37.585 ************************************ 00:06:37.585 END TEST raid1_resize_data_offset_test 00:06:37.585 
************************************ 00:06:37.585 00:06:37.585 real 0m5.170s 00:06:37.585 user 0m4.832s 00:06:37.585 sys 0m0.773s 00:06:37.585 08:44:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.585 08:44:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.845 08:44:15 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:37.845 08:44:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:37.845 08:44:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.845 08:44:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:37.845 ************************************ 00:06:37.845 START TEST raid0_resize_superblock_test 00:06:37.845 ************************************ 00:06:37.845 08:44:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:06:37.845 08:44:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:37.845 08:44:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60100 00:06:37.845 Process raid pid: 60100 00:06:37.845 08:44:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:37.845 08:44:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60100' 00:06:37.845 08:44:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60100 00:06:37.845 08:44:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60100 ']' 00:06:37.846 08:44:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.846 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:06:37.846 08:44:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.846 08:44:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.846 08:44:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.846 08:44:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.846 [2024-09-28 08:44:15.686096] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:37.846 [2024-09-28 08:44:15.686309] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.105 [2024-09-28 08:44:15.856888] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.364 [2024-09-28 08:44:16.100599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.364 [2024-09-28 08:44:16.334753] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:38.364 [2024-09-28 08:44:16.334852] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:38.625 08:44:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.625 08:44:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:38.625 08:44:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:38.625 08:44:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.625 08:44:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.195 
malloc0 00:06:39.195 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.195 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:39.195 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.195 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.195 [2024-09-28 08:44:17.123460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:39.195 [2024-09-28 08:44:17.123626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:39.195 [2024-09-28 08:44:17.123686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:39.195 [2024-09-28 08:44:17.123768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:39.195 [2024-09-28 08:44:17.126192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:39.195 [2024-09-28 08:44:17.126233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:39.195 pt0 00:06:39.195 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.195 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:39.195 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.195 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.456 0d2d3dfb-0ef3-4492-9813-f884a5f08e49 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:39.456 08:44:17 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.456 d7e45f3b-57db-4464-ab78-b7a6ce0d6822 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.456 359ddd95-053e-4697-8f4d-903f147b0a68 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.456 [2024-09-28 08:44:17.333488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev d7e45f3b-57db-4464-ab78-b7a6ce0d6822 is claimed 00:06:39.456 [2024-09-28 08:44:17.333685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 359ddd95-053e-4697-8f4d-903f147b0a68 is claimed 00:06:39.456 [2024-09-28 08:44:17.333863] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:39.456 [2024-09-28 08:44:17.333916] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:39.456 [2024-09-28 08:44:17.334220] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:39.456 [2024-09-28 08:44:17.334466] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:39.456 [2024-09-28 08:44:17.334513] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:39.456 [2024-09-28 08:44:17.334731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.456 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.456 [2024-09-28 08:44:17.449440] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.717 [2024-09-28 08:44:17.477361] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:39.717 [2024-09-28 08:44:17.477385] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd7e45f3b-57db-4464-ab78-b7a6ce0d6822' was resized: old size 131072, new size 204800 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.717 08:44:17 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.717 [2024-09-28 08:44:17.485316] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:39.717 [2024-09-28 08:44:17.485339] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '359ddd95-053e-4697-8f4d-903f147b0a68' was resized: old size 131072, new size 204800 00:06:39.717 [2024-09-28 08:44:17.485368] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.717 08:44:17 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.717 [2024-09-28 08:44:17.601181] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.717 [2024-09-28 08:44:17.644901] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:39.717 [2024-09-28 08:44:17.644966] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:39.717 [2024-09-28 08:44:17.644978] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:39.717 [2024-09-28 08:44:17.644996] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:39.717 [2024-09-28 08:44:17.645109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:39.717 [2024-09-28 08:44:17.645144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:39.717 [2024-09-28 08:44:17.645156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.717 [2024-09-28 08:44:17.656848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:39.717 [2024-09-28 08:44:17.656903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:39.717 [2024-09-28 08:44:17.656923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:39.717 [2024-09-28 08:44:17.656934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:39.717 [2024-09-28 08:44:17.659387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:39.717 [2024-09-28 08:44:17.659423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:39.717 [2024-09-28 08:44:17.661120] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d7e45f3b-57db-4464-ab78-b7a6ce0d6822 00:06:39.717 [2024-09-28 08:44:17.661190] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev d7e45f3b-57db-4464-ab78-b7a6ce0d6822 is claimed 00:06:39.717 [2024-09-28 08:44:17.661320] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 359ddd95-053e-4697-8f4d-903f147b0a68 00:06:39.717 [2024-09-28 08:44:17.661339] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 359ddd95-053e-4697-8f4d-903f147b0a68 is claimed 00:06:39.717 [2024-09-28 08:44:17.661486] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 359ddd95-053e-4697-8f4d-903f147b0a68 (2) smaller than existing raid bdev Raid (3) 00:06:39.717 [2024-09-28 08:44:17.661508] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev d7e45f3b-57db-4464-ab78-b7a6ce0d6822: File exists 00:06:39.717 [2024-09-28 08:44:17.661545] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:39.717 [2024-09-28 08:44:17.661557] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:39.717 [2024-09-28 08:44:17.661838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:39.717 pt0 00:06:39.717 [2024-09-28 08:44:17.662035] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:39.717 [2024-09-28 08:44:17.662056] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:39.717 [2024-09-28 08:44:17.662200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.717 [2024-09-28 08:44:17.685384] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:39.717 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.977 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:39.977 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:39.977 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:39.977 08:44:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60100 00:06:39.977 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60100 ']' 00:06:39.977 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60100 00:06:39.977 08:44:17 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:06:39.977 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.977 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60100 00:06:39.977 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:39.977 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:39.977 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60100' 00:06:39.977 killing process with pid 60100 00:06:39.977 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60100 00:06:39.977 [2024-09-28 08:44:17.767220] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:39.977 [2024-09-28 08:44:17.767288] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:39.977 [2024-09-28 08:44:17.767331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:39.977 [2024-09-28 08:44:17.767340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:39.977 08:44:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60100 00:06:41.358 [2024-09-28 08:44:19.249571] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:42.738 ************************************ 00:06:42.738 END TEST raid0_resize_superblock_test 00:06:42.738 ************************************ 00:06:42.738 08:44:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:42.738 00:06:42.738 real 0m4.994s 00:06:42.738 user 0m4.981s 00:06:42.738 sys 0m0.787s 00:06:42.738 08:44:20 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.738 08:44:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.738 08:44:20 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:42.738 08:44:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:42.738 08:44:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.738 08:44:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:42.738 ************************************ 00:06:42.738 START TEST raid1_resize_superblock_test 00:06:42.738 ************************************ 00:06:42.738 08:44:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:06:42.738 08:44:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:42.738 08:44:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60204 00:06:42.738 08:44:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:42.738 Process raid pid: 60204 00:06:42.738 08:44:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60204' 00:06:42.738 08:44:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60204 00:06:42.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.738 08:44:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60204 ']' 00:06:42.738 08:44:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.738 08:44:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.738 08:44:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.738 08:44:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.738 08:44:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.997 [2024-09-28 08:44:20.750990] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:42.997 [2024-09-28 08:44:20.751263] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.997 [2024-09-28 08:44:20.923067] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.257 [2024-09-28 08:44:21.164125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.517 [2024-09-28 08:44:21.400762] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:43.517 [2024-09-28 08:44:21.400897] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:43.777 08:44:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.777 08:44:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:43.777 08:44:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:06:43.777 08:44:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.777 08:44:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.346 malloc0 00:06:44.346 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.346 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:44.346 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.346 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.346 [2024-09-28 08:44:22.200331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:44.346 [2024-09-28 08:44:22.200496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.346 [2024-09-28 08:44:22.200541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:44.346 [2024-09-28 08:44:22.200591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.346 [2024-09-28 08:44:22.202992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.346 [2024-09-28 08:44:22.203064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:44.346 pt0 00:06:44.346 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.346 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:44.346 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.346 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.606 f7b2223b-8e2b-4887-abea-44997df06950 00:06:44.606 08:44:22 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.606 3ca56df3-25d3-4372-a84b-3470a5ae1176 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.606 386ca512-a7de-486d-90ab-f4ecfa7ccdc8 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.606 [2024-09-28 08:44:22.405272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3ca56df3-25d3-4372-a84b-3470a5ae1176 is claimed 00:06:44.606 [2024-09-28 08:44:22.405367] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 386ca512-a7de-486d-90ab-f4ecfa7ccdc8 is claimed 00:06:44.606 [2024-09-28 08:44:22.405487] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:44.606 [2024-09-28 08:44:22.405504] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:44.606 [2024-09-28 08:44:22.405762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:44.606 [2024-09-28 08:44:22.405950] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:44.606 [2024-09-28 08:44:22.405962] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:44.606 [2024-09-28 08:44:22.406140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.606 08:44:22 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:44.606 [2024-09-28 08:44:22.521252] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.606 [2024-09-28 08:44:22.569109] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:44.606 [2024-09-28 08:44:22.569180] bdev_raid.c:2326:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '3ca56df3-25d3-4372-a84b-3470a5ae1176' was resized: old size 131072, new size 204800 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.606 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.607 [2024-09-28 08:44:22.581056] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:44.607 [2024-09-28 08:44:22.581120] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '386ca512-a7de-486d-90ab-f4ecfa7ccdc8' was resized: old size 131072, new size 204800 00:06:44.607 [2024-09-28 08:44:22.581165] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:44.607 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.607 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:44.607 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.607 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:44.607 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:44.867 08:44:22 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.867 [2024-09-28 08:44:22.672983] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.867 [2024-09-28 08:44:22.728709] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:44.867 [2024-09-28 08:44:22.728819] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:44.867 [2024-09-28 08:44:22.728882] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:44.867 [2024-09-28 08:44:22.729060] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:44.867 [2024-09-28 08:44:22.729282] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:44.867 [2024-09-28 08:44:22.729387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:44.867 [2024-09-28 08:44:22.729447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.867 [2024-09-28 08:44:22.740640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:44.867 [2024-09-28 08:44:22.740705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.867 [2024-09-28 08:44:22.740730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:44.867 [2024-09-28 08:44:22.740742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.867 
[2024-09-28 08:44:22.743261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.867 [2024-09-28 08:44:22.743299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:44.867 [2024-09-28 08:44:22.744978] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 3ca56df3-25d3-4372-a84b-3470a5ae1176 00:06:44.867 [2024-09-28 08:44:22.745036] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3ca56df3-25d3-4372-a84b-3470a5ae1176 is claimed 00:06:44.867 [2024-09-28 08:44:22.745140] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 386ca512-a7de-486d-90ab-f4ecfa7ccdc8 00:06:44.867 [2024-09-28 08:44:22.745159] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 386ca512-a7de-486d-90ab-f4ecfa7ccdc8 is claimed 00:06:44.867 [2024-09-28 08:44:22.745309] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 386ca512-a7de-486d-90ab-f4ecfa7ccdc8 (2) smaller than existing raid bdev Raid (3) 00:06:44.867 [2024-09-28 08:44:22.745331] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 3ca56df3-25d3-4372-a84b-3470a5ae1176: File exists 00:06:44.867 [2024-09-28 08:44:22.745366] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:44.867 [2024-09-28 08:44:22.745378] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:44.867 [2024-09-28 08:44:22.745626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:44.867 pt0 00:06:44.867 [2024-09-28 08:44:22.745801] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:44.867 [2024-09-28 08:44:22.745880] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:44.867 [2024-09-28 08:44:22.746057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:44.867 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.868 [2024-09-28 08:44:22.769248] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60204 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@950 -- # '[' -z 60204 ']' 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60204 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60204 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60204' 00:06:44.868 killing process with pid 60204 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60204 00:06:44.868 [2024-09-28 08:44:22.848716] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:44.868 [2024-09-28 08:44:22.848843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:44.868 [2024-09-28 08:44:22.848921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:44.868 08:44:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60204 00:06:44.868 [2024-09-28 08:44:22.848962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:46.776 [2024-09-28 08:44:24.349664] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:47.713 08:44:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:47.713 00:06:47.713 real 0m5.026s 00:06:47.713 user 0m5.027s 00:06:47.713 sys 0m0.782s
00:06:47.713 08:44:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.713 08:44:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.713 ************************************ 00:06:47.713 END TEST raid1_resize_superblock_test 00:06:47.713 ************************************ 00:06:47.977 08:44:25 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:47.977 08:44:25 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:47.977 08:44:25 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:47.977 08:44:25 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:47.977 08:44:25 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:47.977 08:44:25 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:47.977 08:44:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:47.977 08:44:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.977 08:44:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:47.977 ************************************ 00:06:47.977 START TEST raid_function_test_raid0 00:06:47.977 ************************************ 00:06:47.977 08:44:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:06:47.977 08:44:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:47.977 08:44:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:47.977 08:44:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:47.978 08:44:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60301 00:06:47.978 08:44:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:47.978 08:44:25 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60301' 00:06:47.978 Process raid pid: 60301 00:06:47.978 08:44:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60301 00:06:47.978 08:44:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 60301 ']' 00:06:47.978 08:44:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.978 08:44:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.978 08:44:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.978 08:44:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.978 08:44:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:47.978 [2024-09-28 08:44:25.871857] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:47.978 [2024-09-28 08:44:25.872068] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.239 [2024-09-28 08:44:26.041562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.498 [2024-09-28 08:44:26.281488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.758 [2024-09-28 08:44:26.512166] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.758 [2024-09-28 08:44:26.512326] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.758 08:44:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.758 08:44:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:06:48.758 08:44:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:48.758 08:44:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.758 08:44:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:49.018 Base_1 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:49.018 Base_2 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:49.018 [2024-09-28 08:44:26.839059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:49.018 [2024-09-28 08:44:26.841210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:49.018 [2024-09-28 08:44:26.841361] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:49.018 [2024-09-28 08:44:26.841404] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:49.018 [2024-09-28 08:44:26.841711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:49.018 [2024-09-28 08:44:26.841909] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:49.018 [2024-09-28 08:44:26.841948] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:49.018 [2024-09-28 08:44:26.842151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:49.018 08:44:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:49.277 [2024-09-28 08:44:27.086739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:49.277 /dev/nbd0 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.277 
08:44:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.277 1+0 records in 00:06:49.277 1+0 records out 00:06:49.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392543 s, 10.4 MB/s 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:49.277 08:44:27 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:49.537 { 00:06:49.537 "nbd_device": "/dev/nbd0", 00:06:49.537 "bdev_name": "raid" 00:06:49.537 } 00:06:49.537 ]' 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:49.537 { 00:06:49.537 "nbd_device": "/dev/nbd0", 00:06:49.537 "bdev_name": "raid" 00:06:49.537 } 00:06:49.537 ]' 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' 
-f 5 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:49.537 4096+0 records in 00:06:49.537 4096+0 records out 00:06:49.537 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0340033 s, 61.7 MB/s 00:06:49.537 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:49.797 4096+0 records in 00:06:49.797 4096+0 records out 00:06:49.797 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.186185 s, 11.3 MB/s 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 
-- # (( i = 0 )) 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:49.797 128+0 records in 00:06:49.797 128+0 records out 00:06:49.797 65536 bytes (66 kB, 64 KiB) copied, 0.00119948 s, 54.6 MB/s 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:49.797 2035+0 records in 00:06:49.797 2035+0 records out 00:06:49.797 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0135829 s, 76.7 MB/s 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:49.797 08:44:27 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:49.797 456+0 records in 00:06:49.797 456+0 records out 00:06:49.797 233472 bytes (233 kB, 228 KiB) copied, 0.0030346 s, 76.9 MB/s 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.797 08:44:27 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.797 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:50.056 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.056 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.056 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.056 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.056 [2024-09-28 08:44:27.987740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:50.056 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.056 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.056 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:50.056 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.056 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:50.056 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:50.056 08:44:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:50.315 08:44:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.315 08:44:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.315 08:44:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:50.315 08:44:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.315 08:44:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.315 08:44:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.315 08:44:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:50.315 08:44:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.315 08:44:28 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.315 08:44:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:50.315 08:44:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:50.315 08:44:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60301 00:06:50.315 08:44:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 60301 ']' 00:06:50.315 08:44:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 60301 00:06:50.315 08:44:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:06:50.315 08:44:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.316 08:44:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60301 00:06:50.316 killing process with pid 60301 00:06:50.316 08:44:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.316 08:44:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.316 08:44:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60301' 00:06:50.316 08:44:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 60301 
00:06:50.316 [2024-09-28 08:44:28.304132] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:50.316 [2024-09-28 08:44:28.304264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:50.316 08:44:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 60301 00:06:50.316 [2024-09-28 08:44:28.304316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:50.316 [2024-09-28 08:44:28.304330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:50.575 [2024-09-28 08:44:28.521587] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:51.955 ************************************ 00:06:51.955 END TEST raid_function_test_raid0 00:06:51.955 ************************************ 00:06:51.955 08:44:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:51.955 00:06:51.955 real 0m4.058s 00:06:51.955 user 0m4.535s 00:06:51.955 sys 0m1.079s 00:06:51.955 08:44:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.955 08:44:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:51.955 08:44:29 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:51.955 08:44:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:51.955 08:44:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.955 08:44:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:51.955 ************************************ 00:06:51.955 START TEST raid_function_test_concat 00:06:51.955 ************************************ 00:06:51.955 08:44:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:06:51.955 08:44:29 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:51.955 08:44:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:51.955 08:44:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:51.955 08:44:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60430 00:06:51.955 08:44:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:51.955 08:44:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60430' 00:06:51.955 Process raid pid: 60430 00:06:51.955 08:44:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60430 00:06:51.955 08:44:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 60430 ']' 00:06:51.955 08:44:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.955 08:44:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.955 08:44:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.955 08:44:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.955 08:44:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:52.215 [2024-09-28 08:44:30.004974] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:52.215 [2024-09-28 08:44:30.005202] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.215 [2024-09-28 08:44:30.175980] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.474 [2024-09-28 08:44:30.423369] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.734 [2024-09-28 08:44:30.651439] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.734 [2024-09-28 08:44:30.651576] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:52.995 Base_1 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:52.995 Base_2 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:52.995 [2024-09-28 08:44:30.960342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:52.995 [2024-09-28 08:44:30.962613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:52.995 [2024-09-28 08:44:30.962694] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:52.995 [2024-09-28 08:44:30.962724] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:52.995 [2024-09-28 08:44:30.963034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:52.995 [2024-09-28 08:44:30.963215] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:52.995 [2024-09-28 08:44:30.963225] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:52.995 [2024-09-28 08:44:30.963411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:52.995 08:44:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.256 08:44:31 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:53.256 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:53.256 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:53.256 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:53.256 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:53.256 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:53.256 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:53.256 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:53.256 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:53.256 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:53.256 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:53.256 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:53.256 [2024-09-28 08:44:31.227920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:53.256 /dev/nbd0 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.516 1+0 records in 00:06:53.516 1+0 records out 00:06:53.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521879 s, 7.8 MB/s 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:53.516 { 00:06:53.516 "nbd_device": "/dev/nbd0", 00:06:53.516 "bdev_name": "raid" 00:06:53.516 } 00:06:53.516 ]' 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:53.516 { 00:06:53.516 "nbd_device": "/dev/nbd0", 00:06:53.516 "bdev_name": "raid" 00:06:53.516 } 00:06:53.516 ]' 00:06:53.516 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:53.776 08:44:31 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:53.776 4096+0 records in 00:06:53.776 4096+0 records out 00:06:53.776 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0341364 s, 61.4 MB/s 00:06:53.776 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:54.036 4096+0 records in 00:06:54.036 4096+0 records out 00:06:54.036 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.179607 s, 11.7 MB/s 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:54.036 128+0 records in 00:06:54.036 128+0 records out 00:06:54.036 65536 bytes (66 kB, 64 KiB) copied, 0.00125404 s, 52.3 MB/s 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:54.036 2035+0 records in 00:06:54.036 2035+0 records out 00:06:54.036 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0148374 s, 70.2 MB/s 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:54.036 456+0 records in 00:06:54.036 456+0 records out 00:06:54.036 233472 bytes (233 kB, 228 KiB) copied, 0.0036918 s, 63.2 MB/s 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:54.036 08:44:31 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.036 08:44:31 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:54.296 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.296 [2024-09-28 08:44:32.130424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.296 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.296 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.296 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.296 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.296 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.296 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:54.296 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.296 08:44:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:54.296 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:54.296 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]'
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60430
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 60430 ']'
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 60430
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60430
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:54.556 killing process with pid 60430 08:44:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60430'
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 60430
00:06:54.556 [2024-09-28 08:44:32.444685] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:54.556 [2024-09-28 08:44:32.444814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:54.556 08:44:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 60430
00:06:54.556 [2024-09-28 08:44:32.444871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:54.556 [2024-09-28 08:44:32.444885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:06:54.817 [2024-09-28 08:44:32.663241] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:56.213 08:44:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:06:56.213
00:06:56.213 real 0m4.079s
00:06:56.213 user 0m4.516s
00:06:56.213 sys 0m1.121s
00:06:56.213 08:44:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:56.213 ************************************
00:06:56.213 END TEST raid_function_test_concat
00:06:56.213 ************************************
00:06:56.213 08:44:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:06:56.213 08:44:34 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:06:56.213 08:44:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:06:56.213 08:44:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:56.213 08:44:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:56.213 ************************************
00:06:56.213 START TEST raid0_resize_test
00:06:56.213 ************************************
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:06:56.213 Process raid pid: 60564 08:44:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60564
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60564'
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60564
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60564 ']'
00:06:56.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:56.213 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:56.213 [2024-09-28 08:44:34.149436] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:06:56.213 [2024-09-28 08:44:34.149565] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:56.473 [2024-09-28 08:44:34.316524] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:56.732 [2024-09-28 08:44:34.568283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:56.991 [2024-09-28 08:44:34.799540] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:56.991 [2024-09-28 08:44:34.799577] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:56.991 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:56.991 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0
00:06:56.991 08:44:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:06:56.991 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:56.991 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:56.991 Base_1
00:06:56.991 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:56.991 08:44:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:06:56.991 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:56.991 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.251 Base_2
00:06:57.251 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:57.251 08:44:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']'
00:06:57.251 08:44:34 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid
00:06:57.251 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:57.251 08:44:34 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.251 [2024-09-28 08:44:34.999497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:06:57.251 [2024-09-28 08:44:35.001559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:06:57.251 [2024-09-28 08:44:35.001672] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:57.251 [2024-09-28 08:44:35.001716] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:06:57.251 [2024-09-28 08:44:35.001968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:06:57.251 [2024-09-28 08:44:35.002133] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:57.251 [2024-09-28 08:44:35.002174] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:06:57.251 [2024-09-28 08:44:35.002338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.252 [2024-09-28 08:44:35.011415] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:57.252 [2024-09-28 08:44:35.011483] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:06:57.252 true
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.252 [2024-09-28 08:44:35.027524] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']'
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']'
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.252 [2024-09-28 08:44:35.071302] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:57.252 [2024-09-28 08:44:35.071325] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:06:57.252 [2024-09-28 08:44:35.071354] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144
00:06:57.252 true
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.252 [2024-09-28 08:44:35.087444] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']'
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']'
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60564
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60564 ']'
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 60564
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60564
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60564' killing process with pid 60564 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 60564
00:06:57.252 [2024-09-28 08:44:35.166695] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:57.252 08:44:35 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 60564
00:06:57.252 [2024-09-28 08:44:35.166830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:57.252 [2024-09-28 08:44:35.166890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:57.252 [2024-09-28 08:44:35.166900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:57.252 [2024-09-28 08:44:35.184280] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:58.660 08:44:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:06:58.660
00:06:58.660 real 0m2.470s
00:06:58.660 user 0m2.475s
00:06:58.660 sys 0m0.466s
00:06:58.660 ************************************
00:06:58.660 END TEST raid0_resize_test
00:06:58.660 ************************************
00:06:58.660 08:44:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:58.660 08:44:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.660 08:44:36 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1
00:06:58.660 08:44:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:06:58.660 08:44:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:58.660 08:44:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:58.660 ************************************
00:06:58.660 START TEST raid1_resize_test
00:06:58.660 ************************************
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60621
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60621'
00:06:58.660 Process raid pid: 60621
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60621
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60621 ']'
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:58.660 08:44:36 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.919 [2024-09-28 08:44:36.692927] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:06:58.919 [2024-09-28 08:44:36.693137] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:58.919 [2024-09-28 08:44:36.863029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:59.179 [2024-09-28 08:44:37.125514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:59.438 [2024-09-28 08:44:37.366996] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:59.438 [2024-09-28 08:44:37.367057] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.698 Base_1
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.698 Base_2
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']'
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.698 [2024-09-28 08:44:37.554865] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:06:59.698 [2024-09-28 08:44:37.556918] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:06:59.698 [2024-09-28 08:44:37.556987] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:59.698 [2024-09-28 08:44:37.556999] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:06:59.698 [2024-09-28 08:44:37.557240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:06:59.698 [2024-09-28 08:44:37.557368] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:59.698 [2024-09-28 08:44:37.557377] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:06:59.698 [2024-09-28 08:44:37.557528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.698 [2024-09-28 08:44:37.566783] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:59.698 [2024-09-28 08:44:37.566846] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:06:59.698 true
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.698 [2024-09-28 08:44:37.582910] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']'
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']'
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.698 [2024-09-28 08:44:37.630688] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:59.698 [2024-09-28 08:44:37.630746] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:06:59.698 [2024-09-28 08:44:37.630803] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072
00:06:59.698 true
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.698 [2024-09-28 08:44:37.646810] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']'
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']'
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60621
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60621 ']'
00:06:59.698 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 60621
00:06:59.958 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname
00:06:59.958 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:59.958 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60621
00:06:59.958 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:59.958 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:59.958 killing process with pid 60621 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60621'
00:06:59.958 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 60621
00:06:59.958 [2024-09-28 08:44:37.749988] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:59.958 [2024-09-28 08:44:37.750084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:59.958 08:44:37 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 60621
00:06:59.958 [2024-09-28 08:44:37.750607] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:59.958 [2024-09-28 08:44:37.750626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:59.958 [2024-09-28 08:44:37.768681] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:01.338 08:44:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:07:01.338 ************************************
00:07:01.338 END TEST raid1_resize_test ************************************
00:07:01.338
00:07:01.338 real 0m2.525s
00:07:01.338 user 0m2.562s
00:07:01.338 sys 0m0.459s
00:07:01.338 08:44:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:01.338 08:44:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.338 08:44:39 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:07:01.338 08:44:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:07:01.338 08:44:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:07:01.338 08:44:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:01.338 08:44:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:01.338 08:44:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:01.338 ************************************
00:07:01.338 START TEST raid_state_function_test
00:07:01.338 ************************************
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60686
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:01.338 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60686'
00:07:01.338 Process raid pid: 60686 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60686
00:07:01.339 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 60686 ']'
00:07:01.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:01.339 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:01.339 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:01.339 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:01.339 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.339 [2024-09-28 08:44:39.292585] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:07:01.339 [2024-09-28 08:44:39.292750] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:01.598 [2024-09-28 08:44:39.463295] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:01.857 [2024-09-28 08:44:39.707394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.116 [2024-09-28 08:44:39.943214] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:02.116 [2024-09-28 08:44:39.943323] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.375 [2024-09-28 08:44:40.126443] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:02.375 [2024-09-28 08:44:40.126508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:02.375 [2024-09-28 08:44:40.126519] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:02.375 [2024-09-28 08:44:40.126546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:02.375 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:02.375 "name": "Existed_Raid",
00:07:02.375 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:02.375 "strip_size_kb": 64,
00:07:02.375 "state": "configuring",
00:07:02.375 "raid_level": "raid0",
00:07:02.375 "superblock": false,
00:07:02.375 "num_base_bdevs": 2,
00:07:02.375 "num_base_bdevs_discovered": 0,
00:07:02.375 "num_base_bdevs_operational": 2,
00:07:02.375 "base_bdevs_list": [
00:07:02.375 {
00:07:02.375 "name": "BaseBdev1",
00:07:02.375 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:02.375 "is_configured": false,
00:07:02.375 "data_offset": 0,
00:07:02.375 "data_size": 0
00:07:02.375 },
00:07:02.375 {
00:07:02.375 "name": "BaseBdev2",
00:07:02.376 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:02.376 "is_configured": false,
00:07:02.376 "data_offset": 0,
00:07:02.376 "data_size": 0
00:07:02.376 }
00:07:02.376 ]
00:07:02.376 }'
00:07:02.376 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:02.376 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.634 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:02.634 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:02.634 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.634 [2024-09-28 08:44:40.581562] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:02.634 [2024-09-28 08:44:40.581664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:07:02.634 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:02.634 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:02.635 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:02.635 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.635 [2024-09-28 08:44:40.589561] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:02.635 [2024-09-28 08:44:40.589673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:02.635 [2024-09-28 08:44:40.589710] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:02.635 [2024-09-28 08:44:40.589740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:02.635 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.635 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:02.635 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.635 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.894 [2024-09-28 08:44:40.657565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:02.894 BaseBdev1 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.894 [ 00:07:02.894 { 00:07:02.894 "name": "BaseBdev1", 00:07:02.894 "aliases": [ 00:07:02.894 "18165828-908d-42f6-9ddc-da4ac891cd69" 00:07:02.894 ], 00:07:02.894 "product_name": "Malloc disk", 00:07:02.894 "block_size": 512, 00:07:02.894 "num_blocks": 65536, 00:07:02.894 "uuid": "18165828-908d-42f6-9ddc-da4ac891cd69", 00:07:02.894 "assigned_rate_limits": { 00:07:02.894 "rw_ios_per_sec": 0, 00:07:02.894 "rw_mbytes_per_sec": 0, 00:07:02.894 "r_mbytes_per_sec": 0, 00:07:02.894 "w_mbytes_per_sec": 0 00:07:02.894 }, 00:07:02.894 "claimed": true, 00:07:02.894 "claim_type": "exclusive_write", 00:07:02.894 "zoned": false, 00:07:02.894 "supported_io_types": { 00:07:02.894 "read": true, 00:07:02.894 "write": true, 00:07:02.894 "unmap": true, 00:07:02.894 "flush": true, 00:07:02.894 "reset": true, 00:07:02.894 "nvme_admin": false, 00:07:02.894 "nvme_io": false, 00:07:02.894 "nvme_io_md": false, 00:07:02.894 "write_zeroes": true, 00:07:02.894 "zcopy": true, 00:07:02.894 "get_zone_info": false, 00:07:02.894 "zone_management": false, 00:07:02.894 "zone_append": false, 00:07:02.894 "compare": false, 00:07:02.894 "compare_and_write": false, 00:07:02.894 "abort": true, 00:07:02.894 "seek_hole": false, 00:07:02.894 "seek_data": false, 00:07:02.894 "copy": true, 00:07:02.894 "nvme_iov_md": 
false 00:07:02.894 }, 00:07:02.894 "memory_domains": [ 00:07:02.894 { 00:07:02.894 "dma_device_id": "system", 00:07:02.894 "dma_device_type": 1 00:07:02.894 }, 00:07:02.894 { 00:07:02.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.894 "dma_device_type": 2 00:07:02.894 } 00:07:02.894 ], 00:07:02.894 "driver_specific": {} 00:07:02.894 } 00:07:02.894 ] 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.894 08:44:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.894 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.894 "name": "Existed_Raid", 00:07:02.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.894 "strip_size_kb": 64, 00:07:02.894 "state": "configuring", 00:07:02.894 "raid_level": "raid0", 00:07:02.894 "superblock": false, 00:07:02.894 "num_base_bdevs": 2, 00:07:02.894 "num_base_bdevs_discovered": 1, 00:07:02.894 "num_base_bdevs_operational": 2, 00:07:02.894 "base_bdevs_list": [ 00:07:02.894 { 00:07:02.894 "name": "BaseBdev1", 00:07:02.894 "uuid": "18165828-908d-42f6-9ddc-da4ac891cd69", 00:07:02.894 "is_configured": true, 00:07:02.894 "data_offset": 0, 00:07:02.894 "data_size": 65536 00:07:02.894 }, 00:07:02.894 { 00:07:02.894 "name": "BaseBdev2", 00:07:02.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:02.895 "is_configured": false, 00:07:02.895 "data_offset": 0, 00:07:02.895 "data_size": 0 00:07:02.895 } 00:07:02.895 ] 00:07:02.895 }' 00:07:02.895 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.895 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.463 [2024-09-28 08:44:41.168754] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:03.463 [2024-09-28 08:44:41.168855] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.463 [2024-09-28 08:44:41.176778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:03.463 [2024-09-28 08:44:41.178910] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:03.463 [2024-09-28 08:44:41.179004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.463 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.463 "name": "Existed_Raid", 00:07:03.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.463 "strip_size_kb": 64, 00:07:03.463 "state": "configuring", 00:07:03.463 "raid_level": "raid0", 00:07:03.463 "superblock": false, 00:07:03.463 "num_base_bdevs": 2, 00:07:03.463 "num_base_bdevs_discovered": 1, 00:07:03.463 "num_base_bdevs_operational": 2, 00:07:03.463 "base_bdevs_list": [ 00:07:03.463 { 00:07:03.463 "name": "BaseBdev1", 00:07:03.463 "uuid": "18165828-908d-42f6-9ddc-da4ac891cd69", 00:07:03.463 "is_configured": true, 00:07:03.463 "data_offset": 0, 00:07:03.463 "data_size": 65536 00:07:03.463 }, 00:07:03.463 { 00:07:03.463 "name": "BaseBdev2", 00:07:03.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.463 "is_configured": false, 00:07:03.464 "data_offset": 0, 00:07:03.464 "data_size": 0 
00:07:03.464 } 00:07:03.464 ] 00:07:03.464 }' 00:07:03.464 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.464 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.723 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:03.723 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.723 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.723 [2024-09-28 08:44:41.645118] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:03.723 [2024-09-28 08:44:41.645244] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:03.723 [2024-09-28 08:44:41.645260] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:03.723 [2024-09-28 08:44:41.645599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:03.723 [2024-09-28 08:44:41.645814] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:03.723 [2024-09-28 08:44:41.645836] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:03.723 [2024-09-28 08:44:41.646193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.723 BaseBdev2 00:07:03.723 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.723 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:03.723 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:03.723 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:03.723 08:44:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:03.723 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.724 [ 00:07:03.724 { 00:07:03.724 "name": "BaseBdev2", 00:07:03.724 "aliases": [ 00:07:03.724 "504e6be0-c721-4bcc-8515-e5437ed661dd" 00:07:03.724 ], 00:07:03.724 "product_name": "Malloc disk", 00:07:03.724 "block_size": 512, 00:07:03.724 "num_blocks": 65536, 00:07:03.724 "uuid": "504e6be0-c721-4bcc-8515-e5437ed661dd", 00:07:03.724 "assigned_rate_limits": { 00:07:03.724 "rw_ios_per_sec": 0, 00:07:03.724 "rw_mbytes_per_sec": 0, 00:07:03.724 "r_mbytes_per_sec": 0, 00:07:03.724 "w_mbytes_per_sec": 0 00:07:03.724 }, 00:07:03.724 "claimed": true, 00:07:03.724 "claim_type": "exclusive_write", 00:07:03.724 "zoned": false, 00:07:03.724 "supported_io_types": { 00:07:03.724 "read": true, 00:07:03.724 "write": true, 00:07:03.724 "unmap": true, 00:07:03.724 "flush": true, 00:07:03.724 "reset": true, 00:07:03.724 "nvme_admin": false, 00:07:03.724 "nvme_io": false, 00:07:03.724 "nvme_io_md": 
false, 00:07:03.724 "write_zeroes": true, 00:07:03.724 "zcopy": true, 00:07:03.724 "get_zone_info": false, 00:07:03.724 "zone_management": false, 00:07:03.724 "zone_append": false, 00:07:03.724 "compare": false, 00:07:03.724 "compare_and_write": false, 00:07:03.724 "abort": true, 00:07:03.724 "seek_hole": false, 00:07:03.724 "seek_data": false, 00:07:03.724 "copy": true, 00:07:03.724 "nvme_iov_md": false 00:07:03.724 }, 00:07:03.724 "memory_domains": [ 00:07:03.724 { 00:07:03.724 "dma_device_id": "system", 00:07:03.724 "dma_device_type": 1 00:07:03.724 }, 00:07:03.724 { 00:07:03.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.724 "dma_device_type": 2 00:07:03.724 } 00:07:03.724 ], 00:07:03.724 "driver_specific": {} 00:07:03.724 } 00:07:03.724 ] 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.724 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.983 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.983 "name": "Existed_Raid", 00:07:03.983 "uuid": "39eb18c8-9033-4aec-b0ff-07ce94440767", 00:07:03.983 "strip_size_kb": 64, 00:07:03.983 "state": "online", 00:07:03.983 "raid_level": "raid0", 00:07:03.983 "superblock": false, 00:07:03.983 "num_base_bdevs": 2, 00:07:03.983 "num_base_bdevs_discovered": 2, 00:07:03.983 "num_base_bdevs_operational": 2, 00:07:03.983 "base_bdevs_list": [ 00:07:03.983 { 00:07:03.983 "name": "BaseBdev1", 00:07:03.983 "uuid": "18165828-908d-42f6-9ddc-da4ac891cd69", 00:07:03.983 "is_configured": true, 00:07:03.983 "data_offset": 0, 00:07:03.983 "data_size": 65536 00:07:03.983 }, 00:07:03.983 { 00:07:03.983 "name": "BaseBdev2", 00:07:03.983 "uuid": "504e6be0-c721-4bcc-8515-e5437ed661dd", 00:07:03.983 "is_configured": true, 00:07:03.983 "data_offset": 0, 00:07:03.983 "data_size": 65536 00:07:03.983 } 00:07:03.983 ] 00:07:03.983 }' 00:07:03.983 08:44:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:03.983 08:44:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.243 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:04.243 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:04.243 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:04.243 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:04.243 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:04.243 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:04.243 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:04.243 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:04.243 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.243 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.243 [2024-09-28 08:44:42.088764] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.243 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.243 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:04.243 "name": "Existed_Raid", 00:07:04.243 "aliases": [ 00:07:04.243 "39eb18c8-9033-4aec-b0ff-07ce94440767" 00:07:04.243 ], 00:07:04.243 "product_name": "Raid Volume", 00:07:04.243 "block_size": 512, 00:07:04.243 "num_blocks": 131072, 00:07:04.243 "uuid": "39eb18c8-9033-4aec-b0ff-07ce94440767", 00:07:04.243 "assigned_rate_limits": { 00:07:04.244 "rw_ios_per_sec": 0, 00:07:04.244 "rw_mbytes_per_sec": 0, 00:07:04.244 "r_mbytes_per_sec": 
0, 00:07:04.244 "w_mbytes_per_sec": 0 00:07:04.244 }, 00:07:04.244 "claimed": false, 00:07:04.244 "zoned": false, 00:07:04.244 "supported_io_types": { 00:07:04.244 "read": true, 00:07:04.244 "write": true, 00:07:04.244 "unmap": true, 00:07:04.244 "flush": true, 00:07:04.244 "reset": true, 00:07:04.244 "nvme_admin": false, 00:07:04.244 "nvme_io": false, 00:07:04.244 "nvme_io_md": false, 00:07:04.244 "write_zeroes": true, 00:07:04.244 "zcopy": false, 00:07:04.244 "get_zone_info": false, 00:07:04.244 "zone_management": false, 00:07:04.244 "zone_append": false, 00:07:04.244 "compare": false, 00:07:04.244 "compare_and_write": false, 00:07:04.244 "abort": false, 00:07:04.244 "seek_hole": false, 00:07:04.244 "seek_data": false, 00:07:04.244 "copy": false, 00:07:04.244 "nvme_iov_md": false 00:07:04.244 }, 00:07:04.244 "memory_domains": [ 00:07:04.244 { 00:07:04.244 "dma_device_id": "system", 00:07:04.244 "dma_device_type": 1 00:07:04.244 }, 00:07:04.244 { 00:07:04.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.244 "dma_device_type": 2 00:07:04.244 }, 00:07:04.244 { 00:07:04.244 "dma_device_id": "system", 00:07:04.244 "dma_device_type": 1 00:07:04.244 }, 00:07:04.244 { 00:07:04.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.244 "dma_device_type": 2 00:07:04.244 } 00:07:04.244 ], 00:07:04.244 "driver_specific": { 00:07:04.244 "raid": { 00:07:04.244 "uuid": "39eb18c8-9033-4aec-b0ff-07ce94440767", 00:07:04.244 "strip_size_kb": 64, 00:07:04.244 "state": "online", 00:07:04.244 "raid_level": "raid0", 00:07:04.244 "superblock": false, 00:07:04.244 "num_base_bdevs": 2, 00:07:04.244 "num_base_bdevs_discovered": 2, 00:07:04.244 "num_base_bdevs_operational": 2, 00:07:04.244 "base_bdevs_list": [ 00:07:04.244 { 00:07:04.244 "name": "BaseBdev1", 00:07:04.244 "uuid": "18165828-908d-42f6-9ddc-da4ac891cd69", 00:07:04.244 "is_configured": true, 00:07:04.244 "data_offset": 0, 00:07:04.244 "data_size": 65536 00:07:04.244 }, 00:07:04.244 { 00:07:04.244 "name": "BaseBdev2", 
00:07:04.244 "uuid": "504e6be0-c721-4bcc-8515-e5437ed661dd", 00:07:04.244 "is_configured": true, 00:07:04.244 "data_offset": 0, 00:07:04.244 "data_size": 65536 00:07:04.244 } 00:07:04.244 ] 00:07:04.244 } 00:07:04.244 } 00:07:04.244 }' 00:07:04.244 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:04.244 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:04.244 BaseBdev2' 00:07:04.244 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.244 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:04.244 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:04.244 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.244 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:04.244 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.244 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.244 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.504 [2024-09-28 08:44:42.308144] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:04.504 [2024-09-28 08:44:42.308183] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:04.504 [2024-09-28 08:44:42.308249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.504 "name": "Existed_Raid", 00:07:04.504 "uuid": "39eb18c8-9033-4aec-b0ff-07ce94440767", 00:07:04.504 "strip_size_kb": 64, 00:07:04.504 
"state": "offline", 00:07:04.504 "raid_level": "raid0", 00:07:04.504 "superblock": false, 00:07:04.504 "num_base_bdevs": 2, 00:07:04.504 "num_base_bdevs_discovered": 1, 00:07:04.504 "num_base_bdevs_operational": 1, 00:07:04.504 "base_bdevs_list": [ 00:07:04.504 { 00:07:04.504 "name": null, 00:07:04.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.504 "is_configured": false, 00:07:04.504 "data_offset": 0, 00:07:04.504 "data_size": 65536 00:07:04.504 }, 00:07:04.504 { 00:07:04.504 "name": "BaseBdev2", 00:07:04.504 "uuid": "504e6be0-c721-4bcc-8515-e5437ed661dd", 00:07:04.504 "is_configured": true, 00:07:04.504 "data_offset": 0, 00:07:04.504 "data_size": 65536 00:07:04.504 } 00:07:04.504 ] 00:07:04.504 }' 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.504 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.073 [2024-09-28 08:44:42.879703] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:05.073 [2024-09-28 08:44:42.879816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.073 08:44:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.073 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.073 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:05.073 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:05.073 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:05.073 08:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60686 00:07:05.073 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 60686 ']' 00:07:05.073 08:44:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 60686 00:07:05.073 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:05.073 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.073 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60686 00:07:05.073 killing process with pid 60686 00:07:05.073 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.073 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.073 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60686' 00:07:05.073 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 60686 00:07:05.073 [2024-09-28 08:44:43.053674] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.073 08:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 60686 00:07:05.332 [2024-09-28 08:44:43.072341] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:06.711 00:07:06.711 real 0m5.266s 00:07:06.711 user 0m7.342s 00:07:06.711 sys 0m0.900s 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.711 ************************************ 00:07:06.711 END TEST raid_state_function_test 00:07:06.711 ************************************ 00:07:06.711 08:44:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:06.711 08:44:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:07:06.711 08:44:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.711 08:44:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.711 ************************************ 00:07:06.711 START TEST raid_state_function_test_sb 00:07:06.711 ************************************ 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60942 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60942' 00:07:06.711 Process raid pid: 60942 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60942 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 60942 ']' 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.711 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.711 [2024-09-28 08:44:44.625756] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:06.711 [2024-09-28 08:44:44.625990] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.970 [2024-09-28 08:44:44.795186] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.230 [2024-09-28 08:44:45.054124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.490 [2024-09-28 08:44:45.297650] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.490 [2024-09-28 08:44:45.297768] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.490 [2024-09-28 08:44:45.468391] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:07.490 [2024-09-28 08:44:45.468452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:07.490 [2024-09-28 08:44:45.468462] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:07.490 [2024-09-28 08:44:45.468472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.490 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.749 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.749 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.749 "name": "Existed_Raid", 00:07:07.749 "uuid": "27c3dc4d-9826-4905-a792-e58f5be8bb4c", 00:07:07.749 "strip_size_kb": 64, 00:07:07.749 "state": "configuring", 00:07:07.749 "raid_level": "raid0", 00:07:07.749 "superblock": true, 00:07:07.749 "num_base_bdevs": 2, 00:07:07.749 "num_base_bdevs_discovered": 0, 00:07:07.749 "num_base_bdevs_operational": 2, 00:07:07.749 "base_bdevs_list": [ 00:07:07.749 { 00:07:07.749 "name": "BaseBdev1", 00:07:07.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.749 "is_configured": false, 00:07:07.749 "data_offset": 0, 00:07:07.749 "data_size": 0 00:07:07.749 }, 00:07:07.749 { 00:07:07.749 "name": "BaseBdev2", 00:07:07.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.749 "is_configured": false, 00:07:07.749 "data_offset": 0, 00:07:07.749 "data_size": 0 00:07:07.749 } 00:07:07.749 ] 00:07:07.749 }' 00:07:07.749 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.749 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.010 [2024-09-28 08:44:45.867604] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:08.010 
[2024-09-28 08:44:45.867716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.010 [2024-09-28 08:44:45.875614] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:08.010 [2024-09-28 08:44:45.875713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:08.010 [2024-09-28 08:44:45.875746] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:08.010 [2024-09-28 08:44:45.875773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.010 [2024-09-28 08:44:45.937601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:08.010 BaseBdev1 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.010 [ 00:07:08.010 { 00:07:08.010 "name": "BaseBdev1", 00:07:08.010 "aliases": [ 00:07:08.010 "cb273b0f-19dd-4fee-ab7f-2d1955e92e8d" 00:07:08.010 ], 00:07:08.010 "product_name": "Malloc disk", 00:07:08.010 "block_size": 512, 00:07:08.010 "num_blocks": 65536, 00:07:08.010 "uuid": "cb273b0f-19dd-4fee-ab7f-2d1955e92e8d", 00:07:08.010 "assigned_rate_limits": { 00:07:08.010 "rw_ios_per_sec": 0, 00:07:08.010 "rw_mbytes_per_sec": 0, 00:07:08.010 "r_mbytes_per_sec": 0, 00:07:08.010 "w_mbytes_per_sec": 0 00:07:08.010 }, 00:07:08.010 "claimed": true, 00:07:08.010 "claim_type": 
"exclusive_write", 00:07:08.010 "zoned": false, 00:07:08.010 "supported_io_types": { 00:07:08.010 "read": true, 00:07:08.010 "write": true, 00:07:08.010 "unmap": true, 00:07:08.010 "flush": true, 00:07:08.010 "reset": true, 00:07:08.010 "nvme_admin": false, 00:07:08.010 "nvme_io": false, 00:07:08.010 "nvme_io_md": false, 00:07:08.010 "write_zeroes": true, 00:07:08.010 "zcopy": true, 00:07:08.010 "get_zone_info": false, 00:07:08.010 "zone_management": false, 00:07:08.010 "zone_append": false, 00:07:08.010 "compare": false, 00:07:08.010 "compare_and_write": false, 00:07:08.010 "abort": true, 00:07:08.010 "seek_hole": false, 00:07:08.010 "seek_data": false, 00:07:08.010 "copy": true, 00:07:08.010 "nvme_iov_md": false 00:07:08.010 }, 00:07:08.010 "memory_domains": [ 00:07:08.010 { 00:07:08.010 "dma_device_id": "system", 00:07:08.010 "dma_device_type": 1 00:07:08.010 }, 00:07:08.010 { 00:07:08.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.010 "dma_device_type": 2 00:07:08.010 } 00:07:08.010 ], 00:07:08.010 "driver_specific": {} 00:07:08.010 } 00:07:08.010 ] 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.010 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.270 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.270 "name": "Existed_Raid", 00:07:08.270 "uuid": "2755d148-024a-49a4-a256-ed98e0ceb43f", 00:07:08.270 "strip_size_kb": 64, 00:07:08.270 "state": "configuring", 00:07:08.270 "raid_level": "raid0", 00:07:08.270 "superblock": true, 00:07:08.270 "num_base_bdevs": 2, 00:07:08.270 "num_base_bdevs_discovered": 1, 00:07:08.270 "num_base_bdevs_operational": 2, 00:07:08.270 "base_bdevs_list": [ 00:07:08.270 { 00:07:08.270 "name": "BaseBdev1", 00:07:08.270 "uuid": "cb273b0f-19dd-4fee-ab7f-2d1955e92e8d", 00:07:08.270 "is_configured": true, 00:07:08.270 "data_offset": 2048, 00:07:08.270 "data_size": 63488 00:07:08.270 }, 00:07:08.270 { 00:07:08.270 "name": "BaseBdev2", 00:07:08.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.270 "is_configured": false, 00:07:08.270 "data_offset": 0, 00:07:08.270 
"data_size": 0 00:07:08.270 } 00:07:08.270 ] 00:07:08.270 }' 00:07:08.270 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.270 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.530 [2024-09-28 08:44:46.368898] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:08.530 [2024-09-28 08:44:46.368958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.530 [2024-09-28 08:44:46.376919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:08.530 [2024-09-28 08:44:46.379005] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:08.530 [2024-09-28 08:44:46.379118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 
00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:08.530 "name": "Existed_Raid", 00:07:08.530 "uuid": "48e06658-fd7f-4283-bf14-25b7b13f5769", 00:07:08.530 "strip_size_kb": 64, 00:07:08.530 "state": "configuring", 00:07:08.530 "raid_level": "raid0", 00:07:08.530 "superblock": true, 00:07:08.530 "num_base_bdevs": 2, 00:07:08.530 "num_base_bdevs_discovered": 1, 00:07:08.530 "num_base_bdevs_operational": 2, 00:07:08.530 "base_bdevs_list": [ 00:07:08.530 { 00:07:08.530 "name": "BaseBdev1", 00:07:08.530 "uuid": "cb273b0f-19dd-4fee-ab7f-2d1955e92e8d", 00:07:08.530 "is_configured": true, 00:07:08.530 "data_offset": 2048, 00:07:08.530 "data_size": 63488 00:07:08.530 }, 00:07:08.530 { 00:07:08.530 "name": "BaseBdev2", 00:07:08.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.530 "is_configured": false, 00:07:08.530 "data_offset": 0, 00:07:08.530 "data_size": 0 00:07:08.530 } 00:07:08.530 ] 00:07:08.530 }' 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.530 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.099 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:09.099 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.099 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.099 [2024-09-28 08:44:46.835637] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:09.099 [2024-09-28 08:44:46.836087] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:09.099 [2024-09-28 08:44:46.836152] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:09.099 [2024-09-28 08:44:46.836529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:09.099 BaseBdev2 00:07:09.099 [2024-09-28 
08:44:46.836745] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:09.099 [2024-09-28 08:44:46.836791] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:09.099 [2024-09-28 08:44:46.836980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.099 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:07:09.100 [ 00:07:09.100 { 00:07:09.100 "name": "BaseBdev2", 00:07:09.100 "aliases": [ 00:07:09.100 "70fcb97c-3803-4397-85a0-c1ff4087533b" 00:07:09.100 ], 00:07:09.100 "product_name": "Malloc disk", 00:07:09.100 "block_size": 512, 00:07:09.100 "num_blocks": 65536, 00:07:09.100 "uuid": "70fcb97c-3803-4397-85a0-c1ff4087533b", 00:07:09.100 "assigned_rate_limits": { 00:07:09.100 "rw_ios_per_sec": 0, 00:07:09.100 "rw_mbytes_per_sec": 0, 00:07:09.100 "r_mbytes_per_sec": 0, 00:07:09.100 "w_mbytes_per_sec": 0 00:07:09.100 }, 00:07:09.100 "claimed": true, 00:07:09.100 "claim_type": "exclusive_write", 00:07:09.100 "zoned": false, 00:07:09.100 "supported_io_types": { 00:07:09.100 "read": true, 00:07:09.100 "write": true, 00:07:09.100 "unmap": true, 00:07:09.100 "flush": true, 00:07:09.100 "reset": true, 00:07:09.100 "nvme_admin": false, 00:07:09.100 "nvme_io": false, 00:07:09.100 "nvme_io_md": false, 00:07:09.100 "write_zeroes": true, 00:07:09.100 "zcopy": true, 00:07:09.100 "get_zone_info": false, 00:07:09.100 "zone_management": false, 00:07:09.100 "zone_append": false, 00:07:09.100 "compare": false, 00:07:09.100 "compare_and_write": false, 00:07:09.100 "abort": true, 00:07:09.100 "seek_hole": false, 00:07:09.100 "seek_data": false, 00:07:09.100 "copy": true, 00:07:09.100 "nvme_iov_md": false 00:07:09.100 }, 00:07:09.100 "memory_domains": [ 00:07:09.100 { 00:07:09.100 "dma_device_id": "system", 00:07:09.100 "dma_device_type": 1 00:07:09.100 }, 00:07:09.100 { 00:07:09.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.100 "dma_device_type": 2 00:07:09.100 } 00:07:09.100 ], 00:07:09.100 "driver_specific": {} 00:07:09.100 } 00:07:09.100 ] 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:09.100 
08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.100 "name": 
"Existed_Raid", 00:07:09.100 "uuid": "48e06658-fd7f-4283-bf14-25b7b13f5769", 00:07:09.100 "strip_size_kb": 64, 00:07:09.100 "state": "online", 00:07:09.100 "raid_level": "raid0", 00:07:09.100 "superblock": true, 00:07:09.100 "num_base_bdevs": 2, 00:07:09.100 "num_base_bdevs_discovered": 2, 00:07:09.100 "num_base_bdevs_operational": 2, 00:07:09.100 "base_bdevs_list": [ 00:07:09.100 { 00:07:09.100 "name": "BaseBdev1", 00:07:09.100 "uuid": "cb273b0f-19dd-4fee-ab7f-2d1955e92e8d", 00:07:09.100 "is_configured": true, 00:07:09.100 "data_offset": 2048, 00:07:09.100 "data_size": 63488 00:07:09.100 }, 00:07:09.100 { 00:07:09.100 "name": "BaseBdev2", 00:07:09.100 "uuid": "70fcb97c-3803-4397-85a0-c1ff4087533b", 00:07:09.100 "is_configured": true, 00:07:09.100 "data_offset": 2048, 00:07:09.100 "data_size": 63488 00:07:09.100 } 00:07:09.100 ] 00:07:09.100 }' 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.100 08:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.360 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:09.360 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:09.360 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:09.360 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:09.360 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:09.360 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:09.360 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:09.361 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:07:09.361 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.361 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.361 [2024-09-28 08:44:47.263251] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.361 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.361 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:09.361 "name": "Existed_Raid", 00:07:09.361 "aliases": [ 00:07:09.361 "48e06658-fd7f-4283-bf14-25b7b13f5769" 00:07:09.361 ], 00:07:09.361 "product_name": "Raid Volume", 00:07:09.361 "block_size": 512, 00:07:09.361 "num_blocks": 126976, 00:07:09.361 "uuid": "48e06658-fd7f-4283-bf14-25b7b13f5769", 00:07:09.361 "assigned_rate_limits": { 00:07:09.361 "rw_ios_per_sec": 0, 00:07:09.361 "rw_mbytes_per_sec": 0, 00:07:09.361 "r_mbytes_per_sec": 0, 00:07:09.361 "w_mbytes_per_sec": 0 00:07:09.361 }, 00:07:09.361 "claimed": false, 00:07:09.361 "zoned": false, 00:07:09.361 "supported_io_types": { 00:07:09.361 "read": true, 00:07:09.361 "write": true, 00:07:09.361 "unmap": true, 00:07:09.361 "flush": true, 00:07:09.361 "reset": true, 00:07:09.361 "nvme_admin": false, 00:07:09.361 "nvme_io": false, 00:07:09.361 "nvme_io_md": false, 00:07:09.361 "write_zeroes": true, 00:07:09.361 "zcopy": false, 00:07:09.361 "get_zone_info": false, 00:07:09.361 "zone_management": false, 00:07:09.361 "zone_append": false, 00:07:09.361 "compare": false, 00:07:09.361 "compare_and_write": false, 00:07:09.361 "abort": false, 00:07:09.361 "seek_hole": false, 00:07:09.361 "seek_data": false, 00:07:09.361 "copy": false, 00:07:09.361 "nvme_iov_md": false 00:07:09.361 }, 00:07:09.361 "memory_domains": [ 00:07:09.361 { 00:07:09.361 "dma_device_id": "system", 00:07:09.361 "dma_device_type": 1 00:07:09.361 }, 00:07:09.361 { 00:07:09.361 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:09.361 "dma_device_type": 2 00:07:09.361 }, 00:07:09.361 { 00:07:09.361 "dma_device_id": "system", 00:07:09.361 "dma_device_type": 1 00:07:09.361 }, 00:07:09.361 { 00:07:09.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.361 "dma_device_type": 2 00:07:09.361 } 00:07:09.361 ], 00:07:09.361 "driver_specific": { 00:07:09.361 "raid": { 00:07:09.361 "uuid": "48e06658-fd7f-4283-bf14-25b7b13f5769", 00:07:09.361 "strip_size_kb": 64, 00:07:09.361 "state": "online", 00:07:09.361 "raid_level": "raid0", 00:07:09.361 "superblock": true, 00:07:09.361 "num_base_bdevs": 2, 00:07:09.361 "num_base_bdevs_discovered": 2, 00:07:09.361 "num_base_bdevs_operational": 2, 00:07:09.361 "base_bdevs_list": [ 00:07:09.361 { 00:07:09.361 "name": "BaseBdev1", 00:07:09.361 "uuid": "cb273b0f-19dd-4fee-ab7f-2d1955e92e8d", 00:07:09.361 "is_configured": true, 00:07:09.361 "data_offset": 2048, 00:07:09.361 "data_size": 63488 00:07:09.361 }, 00:07:09.361 { 00:07:09.361 "name": "BaseBdev2", 00:07:09.361 "uuid": "70fcb97c-3803-4397-85a0-c1ff4087533b", 00:07:09.361 "is_configured": true, 00:07:09.361 "data_offset": 2048, 00:07:09.361 "data_size": 63488 00:07:09.361 } 00:07:09.361 ] 00:07:09.361 } 00:07:09.361 } 00:07:09.361 }' 00:07:09.361 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:09.361 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:09.361 BaseBdev2' 00:07:09.361 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.620 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:09.620 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.620 08:44:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:09.620 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.621 [2024-09-28 08:44:47.474661] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:09.621 [2024-09-28 08:44:47.474703] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:09.621 [2024-09-28 08:44:47.474767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.621 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.903 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.903 "name": "Existed_Raid", 00:07:09.903 "uuid": "48e06658-fd7f-4283-bf14-25b7b13f5769", 00:07:09.903 "strip_size_kb": 64, 00:07:09.903 "state": "offline", 00:07:09.903 "raid_level": "raid0", 00:07:09.903 "superblock": true, 00:07:09.903 "num_base_bdevs": 2, 00:07:09.903 "num_base_bdevs_discovered": 1, 00:07:09.903 "num_base_bdevs_operational": 1, 00:07:09.903 "base_bdevs_list": [ 00:07:09.903 { 00:07:09.903 "name": null, 00:07:09.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.903 "is_configured": false, 00:07:09.903 "data_offset": 0, 00:07:09.903 "data_size": 63488 00:07:09.903 }, 00:07:09.903 { 00:07:09.903 "name": "BaseBdev2", 00:07:09.903 "uuid": "70fcb97c-3803-4397-85a0-c1ff4087533b", 00:07:09.903 "is_configured": true, 00:07:09.903 "data_offset": 2048, 00:07:09.903 "data_size": 63488 00:07:09.903 } 00:07:09.903 ] 00:07:09.903 }' 00:07:09.903 08:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:07:09.903 08:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.171 [2024-09-28 08:44:48.056968] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:10.171 [2024-09-28 08:44:48.057029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:10.171 08:44:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:10.171 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.434 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.434 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.434 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:10.434 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:10.434 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:10.434 08:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60942 00:07:10.434 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 60942 ']' 00:07:10.434 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 60942 00:07:10.434 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:10.434 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.434 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60942 00:07:10.434 killing process with pid 60942 00:07:10.434 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.434 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.434 08:44:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60942' 00:07:10.434 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 60942 00:07:10.434 [2024-09-28 08:44:48.253758] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:10.434 08:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 60942 00:07:10.434 [2024-09-28 08:44:48.271388] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.814 08:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:11.814 ************************************ 00:07:11.814 END TEST raid_state_function_test_sb 00:07:11.814 ************************************ 00:07:11.814 00:07:11.814 real 0m5.084s 00:07:11.814 user 0m7.028s 00:07:11.814 sys 0m0.916s 00:07:11.814 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.814 08:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.814 08:44:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:11.814 08:44:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:11.814 08:44:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.814 08:44:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.814 ************************************ 00:07:11.814 START TEST raid_superblock_test 00:07:11.814 ************************************ 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:11.814 08:44:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61194 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61194 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61194 ']' 00:07:11.814 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.814 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.814 [2024-09-28 08:44:49.782837] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:11.814 [2024-09-28 08:44:49.782996] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61194 ] 00:07:12.073 [2024-09-28 08:44:49.950274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.331 [2024-09-28 08:44:50.193024] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.589 [2024-09-28 08:44:50.425522] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.589 [2024-09-28 08:44:50.425556] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.848 malloc1 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.848 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.849 [2024-09-28 08:44:50.668758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:12.849 [2024-09-28 08:44:50.668872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.849 [2024-09-28 08:44:50.668916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:12.849 [2024-09-28 08:44:50.668948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.849 [2024-09-28 08:44:50.671353] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.849 [2024-09-28 08:44:50.671434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:12.849 pt1 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.849 malloc2 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.849 08:44:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.849 [2024-09-28 08:44:50.759417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:12.849 [2024-09-28 08:44:50.759522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.849 [2024-09-28 08:44:50.759565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:12.849 [2024-09-28 08:44:50.759592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.849 [2024-09-28 08:44:50.761991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.849 [2024-09-28 08:44:50.762055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:12.849 pt2 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.849 [2024-09-28 08:44:50.771487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:12.849 [2024-09-28 08:44:50.773584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:12.849 [2024-09-28 08:44:50.773771] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:12.849 [2024-09-28 08:44:50.773791] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:12.849 
[2024-09-28 08:44:50.774035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:12.849 [2024-09-28 08:44:50.774178] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:12.849 [2024-09-28 08:44:50.774190] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:12.849 [2024-09-28 08:44:50.774321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.849 08:44:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.849 "name": "raid_bdev1", 00:07:12.849 "uuid": "fa75aff7-7fd2-4612-904d-2dc2f628b026", 00:07:12.849 "strip_size_kb": 64, 00:07:12.849 "state": "online", 00:07:12.849 "raid_level": "raid0", 00:07:12.849 "superblock": true, 00:07:12.849 "num_base_bdevs": 2, 00:07:12.849 "num_base_bdevs_discovered": 2, 00:07:12.849 "num_base_bdevs_operational": 2, 00:07:12.849 "base_bdevs_list": [ 00:07:12.849 { 00:07:12.849 "name": "pt1", 00:07:12.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:12.849 "is_configured": true, 00:07:12.849 "data_offset": 2048, 00:07:12.849 "data_size": 63488 00:07:12.849 }, 00:07:12.849 { 00:07:12.849 "name": "pt2", 00:07:12.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:12.849 "is_configured": true, 00:07:12.849 "data_offset": 2048, 00:07:12.849 "data_size": 63488 00:07:12.849 } 00:07:12.849 ] 00:07:12.849 }' 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.849 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.417 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:13.417 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:13.417 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:13.417 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:13.417 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:13.417 
08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:13.417 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:13.417 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:13.417 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.417 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.417 [2024-09-28 08:44:51.215008] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.417 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.417 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:13.417 "name": "raid_bdev1", 00:07:13.417 "aliases": [ 00:07:13.417 "fa75aff7-7fd2-4612-904d-2dc2f628b026" 00:07:13.417 ], 00:07:13.417 "product_name": "Raid Volume", 00:07:13.417 "block_size": 512, 00:07:13.417 "num_blocks": 126976, 00:07:13.417 "uuid": "fa75aff7-7fd2-4612-904d-2dc2f628b026", 00:07:13.417 "assigned_rate_limits": { 00:07:13.417 "rw_ios_per_sec": 0, 00:07:13.417 "rw_mbytes_per_sec": 0, 00:07:13.417 "r_mbytes_per_sec": 0, 00:07:13.417 "w_mbytes_per_sec": 0 00:07:13.417 }, 00:07:13.417 "claimed": false, 00:07:13.417 "zoned": false, 00:07:13.417 "supported_io_types": { 00:07:13.417 "read": true, 00:07:13.417 "write": true, 00:07:13.417 "unmap": true, 00:07:13.417 "flush": true, 00:07:13.417 "reset": true, 00:07:13.417 "nvme_admin": false, 00:07:13.417 "nvme_io": false, 00:07:13.417 "nvme_io_md": false, 00:07:13.417 "write_zeroes": true, 00:07:13.417 "zcopy": false, 00:07:13.417 "get_zone_info": false, 00:07:13.417 "zone_management": false, 00:07:13.417 "zone_append": false, 00:07:13.417 "compare": false, 00:07:13.417 "compare_and_write": false, 00:07:13.417 "abort": false, 00:07:13.417 "seek_hole": false, 00:07:13.417 
"seek_data": false, 00:07:13.417 "copy": false, 00:07:13.417 "nvme_iov_md": false 00:07:13.417 }, 00:07:13.417 "memory_domains": [ 00:07:13.417 { 00:07:13.417 "dma_device_id": "system", 00:07:13.417 "dma_device_type": 1 00:07:13.417 }, 00:07:13.417 { 00:07:13.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.417 "dma_device_type": 2 00:07:13.417 }, 00:07:13.417 { 00:07:13.417 "dma_device_id": "system", 00:07:13.417 "dma_device_type": 1 00:07:13.417 }, 00:07:13.417 { 00:07:13.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.418 "dma_device_type": 2 00:07:13.418 } 00:07:13.418 ], 00:07:13.418 "driver_specific": { 00:07:13.418 "raid": { 00:07:13.418 "uuid": "fa75aff7-7fd2-4612-904d-2dc2f628b026", 00:07:13.418 "strip_size_kb": 64, 00:07:13.418 "state": "online", 00:07:13.418 "raid_level": "raid0", 00:07:13.418 "superblock": true, 00:07:13.418 "num_base_bdevs": 2, 00:07:13.418 "num_base_bdevs_discovered": 2, 00:07:13.418 "num_base_bdevs_operational": 2, 00:07:13.418 "base_bdevs_list": [ 00:07:13.418 { 00:07:13.418 "name": "pt1", 00:07:13.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:13.418 "is_configured": true, 00:07:13.418 "data_offset": 2048, 00:07:13.418 "data_size": 63488 00:07:13.418 }, 00:07:13.418 { 00:07:13.418 "name": "pt2", 00:07:13.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:13.418 "is_configured": true, 00:07:13.418 "data_offset": 2048, 00:07:13.418 "data_size": 63488 00:07:13.418 } 00:07:13.418 ] 00:07:13.418 } 00:07:13.418 } 00:07:13.418 }' 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:13.418 pt2' 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.418 08:44:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.418 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.678 [2024-09-28 08:44:51.446569] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fa75aff7-7fd2-4612-904d-2dc2f628b026 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fa75aff7-7fd2-4612-904d-2dc2f628b026 ']' 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.678 [2024-09-28 08:44:51.494254] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:13.678 [2024-09-28 08:44:51.494283] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:13.678 [2024-09-28 08:44:51.494362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.678 [2024-09-28 08:44:51.494408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:13.678 [2024-09-28 08:44:51.494420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.678 [2024-09-28 08:44:51.626056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:13.678 [2024-09-28 08:44:51.628497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:13.678 [2024-09-28 08:44:51.628644] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:07:13.678 [2024-09-28 08:44:51.628765] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:13.678 [2024-09-28 08:44:51.628820] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:13.678 [2024-09-28 08:44:51.628857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:13.678 request: 00:07:13.678 { 00:07:13.678 "name": "raid_bdev1", 00:07:13.678 "raid_level": "raid0", 00:07:13.678 "base_bdevs": [ 00:07:13.678 "malloc1", 00:07:13.678 "malloc2" 00:07:13.678 ], 00:07:13.678 "strip_size_kb": 64, 00:07:13.678 "superblock": false, 00:07:13.678 "method": "bdev_raid_create", 00:07:13.678 "req_id": 1 00:07:13.678 } 00:07:13.678 Got JSON-RPC error response 00:07:13.678 response: 00:07:13.678 { 00:07:13.678 "code": -17, 00:07:13.678 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:13.678 } 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.678 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.678 08:44:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.938 [2024-09-28 08:44:51.689907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:13.938 [2024-09-28 08:44:51.690013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:13.938 [2024-09-28 08:44:51.690051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:13.938 [2024-09-28 08:44:51.690082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:13.938 [2024-09-28 08:44:51.692691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:13.938 [2024-09-28 08:44:51.692762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:13.938 [2024-09-28 08:44:51.692867] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:13.938 [2024-09-28 08:44:51.692958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:13.938 pt1 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.938 "name": "raid_bdev1", 00:07:13.938 "uuid": "fa75aff7-7fd2-4612-904d-2dc2f628b026", 00:07:13.938 "strip_size_kb": 64, 00:07:13.938 "state": "configuring", 00:07:13.938 "raid_level": "raid0", 00:07:13.938 "superblock": true, 00:07:13.938 "num_base_bdevs": 2, 00:07:13.938 "num_base_bdevs_discovered": 1, 00:07:13.938 "num_base_bdevs_operational": 2, 00:07:13.938 "base_bdevs_list": [ 00:07:13.938 { 00:07:13.938 "name": "pt1", 00:07:13.938 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:13.938 "is_configured": true, 00:07:13.938 "data_offset": 2048, 00:07:13.938 "data_size": 63488 00:07:13.938 }, 00:07:13.938 { 00:07:13.938 "name": null, 00:07:13.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:13.938 "is_configured": false, 00:07:13.938 "data_offset": 2048, 00:07:13.938 "data_size": 63488 00:07:13.938 } 00:07:13.938 ] 00:07:13.938 }' 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.938 08:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.198 [2024-09-28 08:44:52.129176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:14.198 [2024-09-28 08:44:52.129252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.198 [2024-09-28 08:44:52.129275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:14.198 [2024-09-28 08:44:52.129286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.198 [2024-09-28 08:44:52.129792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.198 [2024-09-28 08:44:52.129814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:14.198 [2024-09-28 08:44:52.129890] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:14.198 [2024-09-28 08:44:52.129915] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:14.198 [2024-09-28 08:44:52.130029] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:14.198 [2024-09-28 08:44:52.130040] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:14.198 [2024-09-28 08:44:52.130297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:14.198 [2024-09-28 08:44:52.130447] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:14.198 [2024-09-28 08:44:52.130462] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:14.198 [2024-09-28 08:44:52.130625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.198 pt2 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.198 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.199 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.199 "name": "raid_bdev1", 00:07:14.199 "uuid": "fa75aff7-7fd2-4612-904d-2dc2f628b026", 00:07:14.199 "strip_size_kb": 64, 00:07:14.199 "state": "online", 00:07:14.199 "raid_level": "raid0", 00:07:14.199 "superblock": true, 00:07:14.199 "num_base_bdevs": 2, 00:07:14.199 "num_base_bdevs_discovered": 2, 00:07:14.199 "num_base_bdevs_operational": 2, 00:07:14.199 "base_bdevs_list": [ 00:07:14.199 { 00:07:14.199 "name": "pt1", 00:07:14.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:14.199 "is_configured": true, 00:07:14.199 "data_offset": 2048, 00:07:14.199 "data_size": 63488 00:07:14.199 }, 00:07:14.199 { 00:07:14.199 "name": "pt2", 00:07:14.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:14.199 "is_configured": true, 00:07:14.199 "data_offset": 2048, 00:07:14.199 "data_size": 63488 00:07:14.199 } 00:07:14.199 ] 00:07:14.199 }' 00:07:14.199 08:44:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.199 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.768 [2024-09-28 08:44:52.560724] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:14.768 "name": "raid_bdev1", 00:07:14.768 "aliases": [ 00:07:14.768 "fa75aff7-7fd2-4612-904d-2dc2f628b026" 00:07:14.768 ], 00:07:14.768 "product_name": "Raid Volume", 00:07:14.768 "block_size": 512, 00:07:14.768 "num_blocks": 126976, 00:07:14.768 "uuid": "fa75aff7-7fd2-4612-904d-2dc2f628b026", 00:07:14.768 "assigned_rate_limits": { 00:07:14.768 "rw_ios_per_sec": 0, 00:07:14.768 "rw_mbytes_per_sec": 0, 00:07:14.768 
"r_mbytes_per_sec": 0, 00:07:14.768 "w_mbytes_per_sec": 0 00:07:14.768 }, 00:07:14.768 "claimed": false, 00:07:14.768 "zoned": false, 00:07:14.768 "supported_io_types": { 00:07:14.768 "read": true, 00:07:14.768 "write": true, 00:07:14.768 "unmap": true, 00:07:14.768 "flush": true, 00:07:14.768 "reset": true, 00:07:14.768 "nvme_admin": false, 00:07:14.768 "nvme_io": false, 00:07:14.768 "nvme_io_md": false, 00:07:14.768 "write_zeroes": true, 00:07:14.768 "zcopy": false, 00:07:14.768 "get_zone_info": false, 00:07:14.768 "zone_management": false, 00:07:14.768 "zone_append": false, 00:07:14.768 "compare": false, 00:07:14.768 "compare_and_write": false, 00:07:14.768 "abort": false, 00:07:14.768 "seek_hole": false, 00:07:14.768 "seek_data": false, 00:07:14.768 "copy": false, 00:07:14.768 "nvme_iov_md": false 00:07:14.768 }, 00:07:14.768 "memory_domains": [ 00:07:14.768 { 00:07:14.768 "dma_device_id": "system", 00:07:14.768 "dma_device_type": 1 00:07:14.768 }, 00:07:14.768 { 00:07:14.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.768 "dma_device_type": 2 00:07:14.768 }, 00:07:14.768 { 00:07:14.768 "dma_device_id": "system", 00:07:14.768 "dma_device_type": 1 00:07:14.768 }, 00:07:14.768 { 00:07:14.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.768 "dma_device_type": 2 00:07:14.768 } 00:07:14.768 ], 00:07:14.768 "driver_specific": { 00:07:14.768 "raid": { 00:07:14.768 "uuid": "fa75aff7-7fd2-4612-904d-2dc2f628b026", 00:07:14.768 "strip_size_kb": 64, 00:07:14.768 "state": "online", 00:07:14.768 "raid_level": "raid0", 00:07:14.768 "superblock": true, 00:07:14.768 "num_base_bdevs": 2, 00:07:14.768 "num_base_bdevs_discovered": 2, 00:07:14.768 "num_base_bdevs_operational": 2, 00:07:14.768 "base_bdevs_list": [ 00:07:14.768 { 00:07:14.768 "name": "pt1", 00:07:14.768 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:14.768 "is_configured": true, 00:07:14.768 "data_offset": 2048, 00:07:14.768 "data_size": 63488 00:07:14.768 }, 00:07:14.768 { 00:07:14.768 "name": 
"pt2", 00:07:14.768 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:14.768 "is_configured": true, 00:07:14.768 "data_offset": 2048, 00:07:14.768 "data_size": 63488 00:07:14.768 } 00:07:14.768 ] 00:07:14.768 } 00:07:14.768 } 00:07:14.768 }' 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:14.768 pt2' 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:14.768 08:44:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.768 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.027 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.027 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.027 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:15.027 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:15.027 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.027 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.027 [2024-09-28 08:44:52.780305] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.027 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.027 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fa75aff7-7fd2-4612-904d-2dc2f628b026 '!=' fa75aff7-7fd2-4612-904d-2dc2f628b026 ']' 00:07:15.027 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:15.027 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:15.027 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:15.027 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61194 00:07:15.027 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61194 ']' 00:07:15.027 08:44:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 61194 00:07:15.028 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:15.028 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.028 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61194 00:07:15.028 killing process with pid 61194 00:07:15.028 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.028 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.028 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61194' 00:07:15.028 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61194 00:07:15.028 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61194 00:07:15.028 [2024-09-28 08:44:52.847342] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:15.028 [2024-09-28 08:44:52.847436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.028 [2024-09-28 08:44:52.847494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:15.028 [2024-09-28 08:44:52.847506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:15.287 [2024-09-28 08:44:53.061166] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:16.666 08:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:16.666 00:07:16.666 real 0m4.710s 00:07:16.666 user 0m6.380s 00:07:16.666 sys 0m0.839s 00:07:16.666 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.666 ************************************ 00:07:16.666 END TEST 
raid_superblock_test 00:07:16.666 ************************************ 00:07:16.666 08:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.666 08:44:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:16.666 08:44:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:16.666 08:44:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.666 08:44:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:16.666 ************************************ 00:07:16.666 START TEST raid_read_error_test 00:07:16.666 ************************************ 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fJEYp9hiwl 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61400 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61400 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 61400 ']' 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.666 08:44:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.666 08:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.666 [2024-09-28 08:44:54.571683] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:16.666 [2024-09-28 08:44:54.571922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61400 ] 00:07:16.926 [2024-09-28 08:44:54.741098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.185 [2024-09-28 08:44:54.989905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.444 [2024-09-28 08:44:55.217357] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.444 [2024-09-28 08:44:55.217396] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.444 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.444 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:17.444 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:17.444 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:17.444 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.444 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:17.704 BaseBdev1_malloc 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.704 true 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.704 [2024-09-28 08:44:55.457339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:17.704 [2024-09-28 08:44:55.457407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.704 [2024-09-28 08:44:55.457425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:17.704 [2024-09-28 08:44:55.457437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.704 [2024-09-28 08:44:55.459826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.704 [2024-09-28 08:44:55.459862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:17.704 BaseBdev1 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- 
# rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.704 BaseBdev2_malloc 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.704 true 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.704 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.704 [2024-09-28 08:44:55.543803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:17.704 [2024-09-28 08:44:55.543947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.704 [2024-09-28 08:44:55.543967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:17.704 [2024-09-28 08:44:55.543978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.705 [2024-09-28 08:44:55.546347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.705 [2024-09-28 08:44:55.546384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:17.705 BaseBdev2 00:07:17.705 08:44:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.705 [2024-09-28 08:44:55.551864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.705 [2024-09-28 08:44:55.553902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:17.705 [2024-09-28 08:44:55.554171] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:17.705 [2024-09-28 08:44:55.554192] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:17.705 [2024-09-28 08:44:55.554426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:17.705 [2024-09-28 08:44:55.554592] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:17.705 [2024-09-28 08:44:55.554602] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:17.705 [2024-09-28 08:44:55.554784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:17.705 08:44:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.705 "name": "raid_bdev1", 00:07:17.705 "uuid": "4e4f7f2b-bc91-4e08-96f9-2f255feb8e62", 00:07:17.705 "strip_size_kb": 64, 00:07:17.705 "state": "online", 00:07:17.705 "raid_level": "raid0", 00:07:17.705 "superblock": true, 00:07:17.705 "num_base_bdevs": 2, 00:07:17.705 "num_base_bdevs_discovered": 2, 00:07:17.705 "num_base_bdevs_operational": 2, 00:07:17.705 "base_bdevs_list": [ 00:07:17.705 { 00:07:17.705 "name": "BaseBdev1", 00:07:17.705 "uuid": "d6192cf8-7780-54f0-b7cd-443410726e71", 00:07:17.705 "is_configured": true, 00:07:17.705 "data_offset": 2048, 00:07:17.705 "data_size": 63488 00:07:17.705 }, 
00:07:17.705 { 00:07:17.705 "name": "BaseBdev2", 00:07:17.705 "uuid": "550f0bc3-2654-529c-88f7-c0f864b512bc", 00:07:17.705 "is_configured": true, 00:07:17.705 "data_offset": 2048, 00:07:17.705 "data_size": 63488 00:07:17.705 } 00:07:17.705 ] 00:07:17.705 }' 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.705 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.273 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:18.273 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:18.273 [2024-09-28 08:44:56.076520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:19.210 08:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:19.210 08:44:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.210 08:44:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.210 08:44:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.210 08:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:19.210 08:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:19.210 08:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:19.210 08:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:19.210 08:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:19.210 08:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.210 08:44:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.210 08:44:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.210 08:44:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.210 08:44:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.210 08:44:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.210 08:44:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.210 08:44:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.210 08:44:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.210 08:44:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.210 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.210 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.210 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.211 08:44:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.211 "name": "raid_bdev1", 00:07:19.211 "uuid": "4e4f7f2b-bc91-4e08-96f9-2f255feb8e62", 00:07:19.211 "strip_size_kb": 64, 00:07:19.211 "state": "online", 00:07:19.211 "raid_level": "raid0", 00:07:19.211 "superblock": true, 00:07:19.211 "num_base_bdevs": 2, 00:07:19.211 "num_base_bdevs_discovered": 2, 00:07:19.211 "num_base_bdevs_operational": 2, 00:07:19.211 "base_bdevs_list": [ 00:07:19.211 { 00:07:19.211 "name": "BaseBdev1", 00:07:19.211 "uuid": "d6192cf8-7780-54f0-b7cd-443410726e71", 00:07:19.211 "is_configured": true, 00:07:19.211 "data_offset": 2048, 00:07:19.211 "data_size": 63488 00:07:19.211 }, 
00:07:19.211 { 00:07:19.211 "name": "BaseBdev2", 00:07:19.211 "uuid": "550f0bc3-2654-529c-88f7-c0f864b512bc", 00:07:19.211 "is_configured": true, 00:07:19.211 "data_offset": 2048, 00:07:19.211 "data_size": 63488 00:07:19.211 } 00:07:19.211 ] 00:07:19.211 }' 00:07:19.211 08:44:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.211 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.471 08:44:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:19.471 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.471 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.471 [2024-09-28 08:44:57.457211] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:19.471 [2024-09-28 08:44:57.457250] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.471 [2024-09-28 08:44:57.459918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.471 [2024-09-28 08:44:57.459975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.471 [2024-09-28 08:44:57.460012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.471 [2024-09-28 08:44:57.460025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:19.471 { 00:07:19.471 "results": [ 00:07:19.471 { 00:07:19.471 "job": "raid_bdev1", 00:07:19.471 "core_mask": "0x1", 00:07:19.471 "workload": "randrw", 00:07:19.471 "percentage": 50, 00:07:19.471 "status": "finished", 00:07:19.471 "queue_depth": 1, 00:07:19.471 "io_size": 131072, 00:07:19.471 "runtime": 1.381208, 00:07:19.471 "iops": 15186.70612970675, 00:07:19.471 "mibps": 1898.3382662133438, 00:07:19.471 "io_failed": 1, 
00:07:19.471 "io_timeout": 0, 00:07:19.471 "avg_latency_us": 92.50767734176733, 00:07:19.471 "min_latency_us": 24.482096069868994, 00:07:19.471 "max_latency_us": 1402.2986899563318 00:07:19.471 } 00:07:19.471 ], 00:07:19.471 "core_count": 1 00:07:19.471 } 00:07:19.471 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.471 08:44:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61400 00:07:19.471 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 61400 ']' 00:07:19.471 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 61400 00:07:19.731 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:19.731 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.731 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61400 00:07:19.731 killing process with pid 61400 00:07:19.731 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.731 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.731 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61400' 00:07:19.731 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 61400 00:07:19.731 [2024-09-28 08:44:57.505251] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.731 08:44:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 61400 00:07:19.731 [2024-09-28 08:44:57.658904] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.110 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fJEYp9hiwl 00:07:21.110 08:44:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:21.110 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:21.110 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:21.110 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:21.110 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:21.110 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:21.110 08:44:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:21.110 00:07:21.110 real 0m4.586s 00:07:21.110 user 0m5.310s 00:07:21.110 sys 0m0.655s 00:07:21.110 ************************************ 00:07:21.110 END TEST raid_read_error_test 00:07:21.110 ************************************ 00:07:21.110 08:44:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.110 08:44:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.371 08:44:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:21.371 08:44:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:21.371 08:44:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.371 08:44:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.371 ************************************ 00:07:21.371 START TEST raid_write_error_test 00:07:21.371 ************************************ 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 
-- # mktemp -p /raidtest 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CqkfuIpVXS 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61551 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61551 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 61551 ']' 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.371 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.371 [2024-09-28 08:44:59.232236] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:21.371 [2024-09-28 08:44:59.232473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61551 ] 00:07:21.631 [2024-09-28 08:44:59.401066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.890 [2024-09-28 08:44:59.645606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.890 [2024-09-28 08:44:59.873033] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.890 [2024-09-28 08:44:59.873135] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.149 BaseBdev1_malloc 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.149 true 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.149 [2024-09-28 08:45:00.108504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:22.149 [2024-09-28 08:45:00.108567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.149 [2024-09-28 08:45:00.108601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:22.149 [2024-09-28 08:45:00.108614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.149 [2024-09-28 08:45:00.111019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.149 [2024-09-28 08:45:00.111056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:22.149 BaseBdev1 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.149 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.409 BaseBdev2_malloc 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:22.409 08:45:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.409 true 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.409 [2024-09-28 08:45:00.200508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:22.409 [2024-09-28 08:45:00.200569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.409 [2024-09-28 08:45:00.200585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:22.409 [2024-09-28 08:45:00.200597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.409 [2024-09-28 08:45:00.203293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.409 [2024-09-28 08:45:00.203350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:22.409 BaseBdev2 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.409 [2024-09-28 08:45:00.208607] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:22.409 [2024-09-28 08:45:00.211283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:22.409 [2024-09-28 08:45:00.211543] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:22.409 [2024-09-28 08:45:00.211574] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:22.409 [2024-09-28 08:45:00.211880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:22.409 [2024-09-28 08:45:00.212115] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:22.409 [2024-09-28 08:45:00.212141] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:22.409 [2024-09-28 08:45:00.212337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.409 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.410 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.410 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.410 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.410 "name": "raid_bdev1", 00:07:22.410 "uuid": "aea6bbb0-71db-4acb-8cbd-a839633a52c4", 00:07:22.410 "strip_size_kb": 64, 00:07:22.410 "state": "online", 00:07:22.410 "raid_level": "raid0", 00:07:22.410 "superblock": true, 00:07:22.410 "num_base_bdevs": 2, 00:07:22.410 "num_base_bdevs_discovered": 2, 00:07:22.410 "num_base_bdevs_operational": 2, 00:07:22.410 "base_bdevs_list": [ 00:07:22.410 { 00:07:22.410 "name": "BaseBdev1", 00:07:22.410 "uuid": "089a84c2-7cf8-5eef-abf3-075f2a34204a", 00:07:22.410 "is_configured": true, 00:07:22.410 "data_offset": 2048, 00:07:22.410 "data_size": 63488 00:07:22.410 }, 00:07:22.410 { 00:07:22.410 "name": "BaseBdev2", 00:07:22.410 "uuid": "149924d9-11fa-5b52-95f4-298894e924ad", 00:07:22.410 "is_configured": true, 00:07:22.410 "data_offset": 2048, 00:07:22.410 "data_size": 63488 00:07:22.410 } 00:07:22.410 ] 00:07:22.410 }' 00:07:22.410 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.410 08:45:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.669 08:45:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:22.669 08:45:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:22.929 [2024-09-28 08:45:00.713056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.867 08:45:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.867 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.867 "name": "raid_bdev1", 00:07:23.867 "uuid": "aea6bbb0-71db-4acb-8cbd-a839633a52c4", 00:07:23.867 "strip_size_kb": 64, 00:07:23.867 "state": "online", 00:07:23.867 "raid_level": "raid0", 00:07:23.867 "superblock": true, 00:07:23.868 "num_base_bdevs": 2, 00:07:23.868 "num_base_bdevs_discovered": 2, 00:07:23.868 "num_base_bdevs_operational": 2, 00:07:23.868 "base_bdevs_list": [ 00:07:23.868 { 00:07:23.868 "name": "BaseBdev1", 00:07:23.868 "uuid": "089a84c2-7cf8-5eef-abf3-075f2a34204a", 00:07:23.868 "is_configured": true, 00:07:23.868 "data_offset": 2048, 00:07:23.868 "data_size": 63488 00:07:23.868 }, 00:07:23.868 { 00:07:23.868 "name": "BaseBdev2", 00:07:23.868 "uuid": "149924d9-11fa-5b52-95f4-298894e924ad", 00:07:23.868 "is_configured": true, 00:07:23.868 "data_offset": 2048, 00:07:23.868 "data_size": 63488 00:07:23.868 } 00:07:23.868 ] 00:07:23.868 }' 00:07:23.868 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.868 08:45:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.127 08:45:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:24.127 08:45:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.127 08:45:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.127 [2024-09-28 08:45:02.117693] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:24.127 [2024-09-28 08:45:02.117735] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:24.127 [2024-09-28 08:45:02.120474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.127 [2024-09-28 08:45:02.120528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.127 [2024-09-28 08:45:02.120564] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:24.127 [2024-09-28 08:45:02.120576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:24.387 { 00:07:24.387 "results": [ 00:07:24.387 { 00:07:24.387 "job": "raid_bdev1", 00:07:24.387 "core_mask": "0x1", 00:07:24.387 "workload": "randrw", 00:07:24.387 "percentage": 50, 00:07:24.387 "status": "finished", 00:07:24.387 "queue_depth": 1, 00:07:24.387 "io_size": 131072, 00:07:24.387 "runtime": 1.405317, 00:07:24.387 "iops": 14961.748843855159, 00:07:24.387 "mibps": 1870.2186054818949, 00:07:24.387 "io_failed": 1, 00:07:24.387 "io_timeout": 0, 00:07:24.387 "avg_latency_us": 93.91191288056966, 00:07:24.387 "min_latency_us": 24.817467248908297, 00:07:24.387 "max_latency_us": 1352.216593886463 00:07:24.387 } 00:07:24.387 ], 00:07:24.387 "core_count": 1 00:07:24.387 } 00:07:24.387 08:45:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.387 08:45:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61551 00:07:24.387 08:45:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 61551 ']' 00:07:24.387 08:45:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 61551 00:07:24.387 08:45:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:24.387 08:45:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.387 08:45:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61551 00:07:24.387 killing process with pid 61551 00:07:24.387 08:45:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.387 08:45:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.387 08:45:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61551' 00:07:24.387 08:45:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 61551 00:07:24.387 [2024-09-28 08:45:02.169200] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.387 08:45:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 61551 00:07:24.387 [2024-09-28 08:45:02.312341] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.788 08:45:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:25.788 08:45:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CqkfuIpVXS 00:07:25.788 08:45:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:25.788 08:45:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:25.788 08:45:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:25.788 08:45:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:25.788 08:45:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:25.788 08:45:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:25.788 00:07:25.788 real 0m4.592s 00:07:25.788 user 0m5.307s 00:07:25.788 sys 0m0.685s 00:07:25.788 08:45:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.788 08:45:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.788 ************************************ 00:07:25.788 END TEST raid_write_error_test 00:07:25.788 ************************************ 00:07:25.788 08:45:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:25.788 08:45:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:25.788 08:45:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:25.788 08:45:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.788 08:45:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.054 ************************************ 00:07:26.054 START TEST raid_state_function_test 00:07:26.054 ************************************ 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61689 00:07:26.054 08:45:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61689' 00:07:26.054 Process raid pid: 61689 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61689 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 61689 ']' 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.054 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.054 [2024-09-28 08:45:03.898800] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:26.054 [2024-09-28 08:45:03.898929] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.314 [2024-09-28 08:45:04.068618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.573 [2024-09-28 08:45:04.313957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.573 [2024-09-28 08:45:04.550346] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.573 [2024-09-28 08:45:04.550381] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.833 [2024-09-28 08:45:04.723034] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:26.833 [2024-09-28 08:45:04.723100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:26.833 [2024-09-28 08:45:04.723113] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.833 [2024-09-28 08:45:04.723139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.833 08:45:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.833 "name": "Existed_Raid", 00:07:26.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.833 "strip_size_kb": 64, 00:07:26.833 "state": "configuring", 00:07:26.833 
"raid_level": "concat", 00:07:26.833 "superblock": false, 00:07:26.833 "num_base_bdevs": 2, 00:07:26.833 "num_base_bdevs_discovered": 0, 00:07:26.833 "num_base_bdevs_operational": 2, 00:07:26.833 "base_bdevs_list": [ 00:07:26.833 { 00:07:26.833 "name": "BaseBdev1", 00:07:26.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.833 "is_configured": false, 00:07:26.833 "data_offset": 0, 00:07:26.833 "data_size": 0 00:07:26.833 }, 00:07:26.833 { 00:07:26.833 "name": "BaseBdev2", 00:07:26.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.833 "is_configured": false, 00:07:26.833 "data_offset": 0, 00:07:26.833 "data_size": 0 00:07:26.833 } 00:07:26.833 ] 00:07:26.833 }' 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.833 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.402 [2024-09-28 08:45:05.142278] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:27.402 [2024-09-28 08:45:05.142323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:27.402 [2024-09-28 08:45:05.150287] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:27.402 [2024-09-28 08:45:05.150334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:27.402 [2024-09-28 08:45:05.150344] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.402 [2024-09-28 08:45:05.150357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.402 [2024-09-28 08:45:05.232894] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:27.402 BaseBdev1 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.402 [ 00:07:27.402 { 00:07:27.402 "name": "BaseBdev1", 00:07:27.402 "aliases": [ 00:07:27.402 "424b762d-62bb-4087-8cbd-3d79857d2681" 00:07:27.402 ], 00:07:27.402 "product_name": "Malloc disk", 00:07:27.402 "block_size": 512, 00:07:27.402 "num_blocks": 65536, 00:07:27.402 "uuid": "424b762d-62bb-4087-8cbd-3d79857d2681", 00:07:27.402 "assigned_rate_limits": { 00:07:27.402 "rw_ios_per_sec": 0, 00:07:27.402 "rw_mbytes_per_sec": 0, 00:07:27.402 "r_mbytes_per_sec": 0, 00:07:27.402 "w_mbytes_per_sec": 0 00:07:27.402 }, 00:07:27.402 "claimed": true, 00:07:27.402 "claim_type": "exclusive_write", 00:07:27.402 "zoned": false, 00:07:27.402 "supported_io_types": { 00:07:27.402 "read": true, 00:07:27.402 "write": true, 00:07:27.402 "unmap": true, 00:07:27.402 "flush": true, 00:07:27.402 "reset": true, 00:07:27.402 "nvme_admin": false, 00:07:27.402 "nvme_io": false, 00:07:27.402 "nvme_io_md": false, 00:07:27.402 "write_zeroes": true, 00:07:27.402 "zcopy": true, 00:07:27.402 "get_zone_info": false, 00:07:27.402 "zone_management": false, 00:07:27.402 "zone_append": false, 00:07:27.402 "compare": false, 00:07:27.402 "compare_and_write": false, 00:07:27.402 "abort": true, 00:07:27.402 "seek_hole": false, 00:07:27.402 "seek_data": false, 00:07:27.402 "copy": true, 00:07:27.402 "nvme_iov_md": 
false 00:07:27.402 }, 00:07:27.402 "memory_domains": [ 00:07:27.402 { 00:07:27.402 "dma_device_id": "system", 00:07:27.402 "dma_device_type": 1 00:07:27.402 }, 00:07:27.402 { 00:07:27.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.402 "dma_device_type": 2 00:07:27.402 } 00:07:27.402 ], 00:07:27.402 "driver_specific": {} 00:07:27.402 } 00:07:27.402 ] 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.402 
08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.402 "name": "Existed_Raid", 00:07:27.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.402 "strip_size_kb": 64, 00:07:27.402 "state": "configuring", 00:07:27.402 "raid_level": "concat", 00:07:27.402 "superblock": false, 00:07:27.402 "num_base_bdevs": 2, 00:07:27.402 "num_base_bdevs_discovered": 1, 00:07:27.402 "num_base_bdevs_operational": 2, 00:07:27.402 "base_bdevs_list": [ 00:07:27.402 { 00:07:27.402 "name": "BaseBdev1", 00:07:27.402 "uuid": "424b762d-62bb-4087-8cbd-3d79857d2681", 00:07:27.402 "is_configured": true, 00:07:27.402 "data_offset": 0, 00:07:27.402 "data_size": 65536 00:07:27.402 }, 00:07:27.402 { 00:07:27.402 "name": "BaseBdev2", 00:07:27.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.402 "is_configured": false, 00:07:27.402 "data_offset": 0, 00:07:27.402 "data_size": 0 00:07:27.402 } 00:07:27.402 ] 00:07:27.402 }' 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.402 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.971 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:27.971 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.971 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.971 [2024-09-28 08:45:05.688142] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:27.971 [2024-09-28 08:45:05.688201] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:07:27.971 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.971 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:27.971 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.971 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.971 [2024-09-28 08:45:05.696164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:27.971 [2024-09-28 08:45:05.698300] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:27.971 [2024-09-28 08:45:05.698346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:27.971 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.971 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:27.971 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:27.971 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:27.971 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:27.971 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:27.971 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:27.972 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:27.972 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:27.972 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:27.972 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:27.972 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:27.972 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:27.972 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:27.972 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:27.972 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.972 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.972 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.972 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:27.972 "name": "Existed_Raid",
00:07:27.972 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:27.972 "strip_size_kb": 64,
00:07:27.972 "state": "configuring",
00:07:27.972 "raid_level": "concat",
00:07:27.972 "superblock": false,
00:07:27.972 "num_base_bdevs": 2,
00:07:27.972 "num_base_bdevs_discovered": 1,
00:07:27.972 "num_base_bdevs_operational": 2,
00:07:27.972 "base_bdevs_list": [
00:07:27.972 {
00:07:27.972 "name": "BaseBdev1",
00:07:27.972 "uuid": "424b762d-62bb-4087-8cbd-3d79857d2681",
00:07:27.972 "is_configured": true,
00:07:27.972 "data_offset": 0,
00:07:27.972 "data_size": 65536
00:07:27.972 },
00:07:27.972 {
00:07:27.972 "name": "BaseBdev2",
00:07:27.972 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:27.972 "is_configured": false,
00:07:27.972 "data_offset": 0,
00:07:27.972 "data_size": 0
00:07:27.972 }
00:07:27.972 ]
00:07:27.972 }'
00:07:27.972 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:27.972 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:28.231 [2024-09-28 08:45:06.143212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:28.231 [2024-09-28 08:45:06.143266] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:28.231 [2024-09-28 08:45:06.143275] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:28.231 [2024-09-28 08:45:06.143586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:28.231 [2024-09-28 08:45:06.143799] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:28.231 [2024-09-28 08:45:06.143820] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:07:28.231 [2024-09-28 08:45:06.144163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:28.231 BaseBdev2
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:28.231 [
00:07:28.231 {
00:07:28.231 "name": "BaseBdev2",
00:07:28.231 "aliases": [
00:07:28.231 "2043e543-464b-4bab-bf00-5294a7702b49"
00:07:28.231 ],
00:07:28.231 "product_name": "Malloc disk",
00:07:28.231 "block_size": 512,
00:07:28.231 "num_blocks": 65536,
00:07:28.231 "uuid": "2043e543-464b-4bab-bf00-5294a7702b49",
00:07:28.231 "assigned_rate_limits": {
00:07:28.231 "rw_ios_per_sec": 0,
00:07:28.231 "rw_mbytes_per_sec": 0,
00:07:28.231 "r_mbytes_per_sec": 0,
00:07:28.231 "w_mbytes_per_sec": 0
00:07:28.231 },
00:07:28.231 "claimed": true,
00:07:28.231 "claim_type": "exclusive_write",
00:07:28.231 "zoned": false,
00:07:28.231 "supported_io_types": {
00:07:28.231 "read": true,
00:07:28.231 "write": true,
00:07:28.231 "unmap": true,
00:07:28.231 "flush": true,
00:07:28.231 "reset": true,
00:07:28.231 "nvme_admin": false,
00:07:28.231 "nvme_io": false,
00:07:28.231 "nvme_io_md": false,
00:07:28.231 "write_zeroes": true,
00:07:28.231 "zcopy": true,
00:07:28.231 "get_zone_info": false,
00:07:28.231 "zone_management": false,
00:07:28.231 "zone_append": false,
00:07:28.231 "compare": false,
00:07:28.231 "compare_and_write": false,
00:07:28.231 "abort": true,
00:07:28.231 "seek_hole": false,
00:07:28.231 "seek_data": false,
00:07:28.231 "copy": true,
00:07:28.231 "nvme_iov_md": false
00:07:28.231 },
00:07:28.231 "memory_domains": [
00:07:28.231 {
00:07:28.231 "dma_device_id": "system",
00:07:28.231 "dma_device_type": 1
00:07:28.231 },
00:07:28.231 {
00:07:28.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:28.231 "dma_device_type": 2
00:07:28.231 }
00:07:28.231 ],
00:07:28.231 "driver_specific": {}
00:07:28.231 }
00:07:28.231 ]
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:28.231 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:28.232 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.232 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:28.232 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:28.490 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:28.490 "name": "Existed_Raid",
00:07:28.490 "uuid": "c4108d4c-3158-4f69-b4d9-623428cf468d",
00:07:28.490 "strip_size_kb": 64,
00:07:28.490 "state": "online",
00:07:28.490 "raid_level": "concat",
00:07:28.490 "superblock": false,
00:07:28.490 "num_base_bdevs": 2,
00:07:28.490 "num_base_bdevs_discovered": 2,
00:07:28.490 "num_base_bdevs_operational": 2,
00:07:28.490 "base_bdevs_list": [
00:07:28.490 {
00:07:28.490 "name": "BaseBdev1",
00:07:28.490 "uuid": "424b762d-62bb-4087-8cbd-3d79857d2681",
00:07:28.490 "is_configured": true,
00:07:28.490 "data_offset": 0,
00:07:28.490 "data_size": 65536
00:07:28.490 },
00:07:28.490 {
00:07:28.490 "name": "BaseBdev2",
00:07:28.490 "uuid": "2043e543-464b-4bab-bf00-5294a7702b49",
00:07:28.490 "is_configured": true,
00:07:28.490 "data_offset": 0,
00:07:28.490 "data_size": 65536
00:07:28.490 }
00:07:28.490 ]
00:07:28.490 }'
00:07:28.490 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:28.490 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:28.750 [2024-09-28 08:45:06.606968] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:28.750 "name": "Existed_Raid",
00:07:28.750 "aliases": [
00:07:28.750 "c4108d4c-3158-4f69-b4d9-623428cf468d"
00:07:28.750 ],
00:07:28.750 "product_name": "Raid Volume",
00:07:28.750 "block_size": 512,
00:07:28.750 "num_blocks": 131072,
00:07:28.750 "uuid": "c4108d4c-3158-4f69-b4d9-623428cf468d",
00:07:28.750 "assigned_rate_limits": {
00:07:28.750 "rw_ios_per_sec": 0,
00:07:28.750 "rw_mbytes_per_sec": 0,
00:07:28.750 "r_mbytes_per_sec": 0,
00:07:28.750 "w_mbytes_per_sec": 0
00:07:28.750 },
00:07:28.750 "claimed": false,
00:07:28.750 "zoned": false,
00:07:28.750 "supported_io_types": {
00:07:28.750 "read": true,
00:07:28.750 "write": true,
00:07:28.750 "unmap": true,
00:07:28.750 "flush": true,
00:07:28.750 "reset": true,
00:07:28.750 "nvme_admin": false,
00:07:28.750 "nvme_io": false,
00:07:28.750 "nvme_io_md": false,
00:07:28.750 "write_zeroes": true,
00:07:28.750 "zcopy": false,
00:07:28.750 "get_zone_info": false,
00:07:28.750 "zone_management": false,
00:07:28.750 "zone_append": false,
00:07:28.750 "compare": false,
00:07:28.750 "compare_and_write": false,
00:07:28.750 "abort": false,
00:07:28.750 "seek_hole": false,
00:07:28.750 "seek_data": false,
00:07:28.750 "copy": false,
00:07:28.750 "nvme_iov_md": false
00:07:28.750 },
00:07:28.750 "memory_domains": [
00:07:28.750 {
00:07:28.750 "dma_device_id": "system",
00:07:28.750 "dma_device_type": 1
00:07:28.750 },
00:07:28.750 {
00:07:28.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:28.750 "dma_device_type": 2
00:07:28.750 },
00:07:28.750 {
00:07:28.750 "dma_device_id": "system",
00:07:28.750 "dma_device_type": 1
00:07:28.750 },
00:07:28.750 {
00:07:28.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:28.750 "dma_device_type": 2
00:07:28.750 }
00:07:28.750 ],
00:07:28.750 "driver_specific": {
00:07:28.750 "raid": {
00:07:28.750 "uuid": "c4108d4c-3158-4f69-b4d9-623428cf468d",
00:07:28.750 "strip_size_kb": 64,
00:07:28.750 "state": "online",
00:07:28.750 "raid_level": "concat",
00:07:28.750 "superblock": false,
00:07:28.750 "num_base_bdevs": 2,
00:07:28.750 "num_base_bdevs_discovered": 2,
00:07:28.750 "num_base_bdevs_operational": 2,
00:07:28.750 "base_bdevs_list": [
00:07:28.750 {
00:07:28.750 "name": "BaseBdev1",
00:07:28.750 "uuid": "424b762d-62bb-4087-8cbd-3d79857d2681",
00:07:28.750 "is_configured": true,
00:07:28.750 "data_offset": 0,
00:07:28.750 "data_size": 65536
00:07:28.750 },
00:07:28.750 {
00:07:28.750 "name": "BaseBdev2",
00:07:28.750 "uuid": "2043e543-464b-4bab-bf00-5294a7702b49",
00:07:28.750 "is_configured": true,
00:07:28.750 "data_offset": 0,
00:07:28.750 "data_size": 65536
00:07:28.750 }
00:07:28.750 ]
00:07:28.750 }
00:07:28.750 }
00:07:28.750 }'
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:28.750 BaseBdev2'
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:28.750 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.010 [2024-09-28 08:45:06.810354] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:07:29.010 [2024-09-28 08:45:06.810390] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:29.010 [2024-09-28 08:45:06.810445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:29.010 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:29.010 "name": "Existed_Raid",
00:07:29.010 "uuid": "c4108d4c-3158-4f69-b4d9-623428cf468d",
00:07:29.010 "strip_size_kb": 64,
00:07:29.010 "state": "offline",
00:07:29.010 "raid_level": "concat",
00:07:29.010 "superblock": false,
00:07:29.010 "num_base_bdevs": 2,
00:07:29.010 "num_base_bdevs_discovered": 1,
00:07:29.010 "num_base_bdevs_operational": 1,
00:07:29.010 "base_bdevs_list": [
00:07:29.010 {
00:07:29.010 "name": null,
00:07:29.010 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:29.010 "is_configured": false,
00:07:29.010 "data_offset": 0,
00:07:29.010 "data_size": 65536
00:07:29.010 },
00:07:29.010 {
00:07:29.010 "name": "BaseBdev2",
00:07:29.010 "uuid": "2043e543-464b-4bab-bf00-5294a7702b49",
00:07:29.010 "is_configured": true,
00:07:29.010 "data_offset": 0,
00:07:29.011 "data_size": 65536
00:07:29.011 }
00:07:29.011 ]
00:07:29.011 }'
00:07:29.011 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:29.011 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.579 [2024-09-28 08:45:07.404343] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:07:29.579 [2024-09-28 08:45:07.404408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61689
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 61689 ']'
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 61689
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:29.579 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61689
00:07:29.838 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:29.838 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:29.838 killing process with pid 61689
00:07:29.838 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61689'
00:07:29.838 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 61689 [2024-09-28 08:45:07.599796] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:29.838 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 61689 [2024-09-28 08:45:07.617268] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:31.217 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:07:31.217
00:07:31.217 real 0m5.157s
00:07:31.217 user 0m7.121s
00:07:31.217 sys 0m0.972s
00:07:31.217 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:31.217 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.217 ************************************
00:07:31.217 END TEST raid_state_function_test
00:07:31.217 ************************************
00:07:31.217 08:45:09 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true
00:07:31.217 08:45:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:31.217 08:45:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:31.217 08:45:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:31.217 ************************************
00:07:31.217 START TEST raid_state_function_test_sb
00:07:31.217 ************************************
00:07:31.217 08:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true
00:07:31.217 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:07:31.217 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:31.217 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:07:31.217 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:31.217 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:31.217 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:31.217 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:31.217 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:31.217 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61947
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61947'
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:31.218 Process raid pid: 61947
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61947
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 61947 ']'
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:31.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:31.218 08:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:31.218 [2024-09-28 08:45:09.122151] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:07:31.218 [2024-09-28 08:45:09.122284] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:31.476 [2024-09-28 08:45:09.293788] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:31.735 [2024-09-28 08:45:09.535722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.994 [2024-09-28 08:45:09.766500] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:31.994 [2024-09-28 08:45:09.766538] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:31.994 [2024-09-28 08:45:09.961294] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:31.994 [2024-09-28 08:45:09.961367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:31.994 [2024-09-28 08:45:09.961378] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:31.994 [2024-09-28 08:45:09.961388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:31.994 08:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.253 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:32.253 "name": "Existed_Raid",
00:07:32.253 "uuid": "f3167efa-f589-4483-8b13-babffeec6117",
00:07:32.253 "strip_size_kb": 64,
00:07:32.253 "state": "configuring",
00:07:32.254 "raid_level": "concat",
00:07:32.254 "superblock": true,
00:07:32.254 "num_base_bdevs": 2,
00:07:32.254 "num_base_bdevs_discovered": 0,
00:07:32.254 "num_base_bdevs_operational": 2,
00:07:32.254 "base_bdevs_list": [
00:07:32.254 {
00:07:32.254 "name": "BaseBdev1",
00:07:32.254 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:32.254 "is_configured": false,
00:07:32.254 "data_offset": 0,
00:07:32.254 "data_size": 0
00:07:32.254 },
00:07:32.254 {
00:07:32.254 "name": "BaseBdev2",
00:07:32.254 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:32.254 "is_configured": false,
00:07:32.254 "data_offset": 0,
00:07:32.254 "data_size": 0
00:07:32.254 }
00:07:32.254 ]
00:07:32.254 }'
00:07:32.254 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:32.254 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:32.513 [2024-09-28 08:45:10.400456] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:32.513 [2024-09-28 08:45:10.400500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:32.513 [2024-09-28 08:45:10.408474] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:32.513 [2024-09-28 08:45:10.408520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:32.513 [2024-09-28 08:45:10.408530] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:32.513 [2024-09-28 08:45:10.408544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:32.513 [2024-09-28 08:45:10.490101] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:32.513 BaseBdev1
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.513 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:32.773 [
00:07:32.773 {
00:07:32.773 "name": "BaseBdev1",
00:07:32.773 "aliases": [
00:07:32.773 "21e0aa99-86c3-47b3-86cd-e2525f61d481"
00:07:32.773 ],
00:07:32.773 "product_name": "Malloc disk",
00:07:32.773 "block_size": 512,
00:07:32.773 "num_blocks": 65536,
00:07:32.773 "uuid": "21e0aa99-86c3-47b3-86cd-e2525f61d481",
00:07:32.773 "assigned_rate_limits": {
00:07:32.773 "rw_ios_per_sec": 0,
00:07:32.773 "rw_mbytes_per_sec": 0,
00:07:32.773 "r_mbytes_per_sec": 0,
00:07:32.773 "w_mbytes_per_sec": 0
00:07:32.773 },
00:07:32.773 "claimed": true,
00:07:32.773 "claim_type": "exclusive_write", 00:07:32.773 "zoned": false, 00:07:32.773 "supported_io_types": { 00:07:32.773 "read": true, 00:07:32.773 "write": true, 00:07:32.773 "unmap": true, 00:07:32.773 "flush": true, 00:07:32.773 "reset": true, 00:07:32.773 "nvme_admin": false, 00:07:32.773 "nvme_io": false, 00:07:32.773 "nvme_io_md": false, 00:07:32.773 "write_zeroes": true, 00:07:32.773 "zcopy": true, 00:07:32.773 "get_zone_info": false, 00:07:32.773 "zone_management": false, 00:07:32.773 "zone_append": false, 00:07:32.773 "compare": false, 00:07:32.773 "compare_and_write": false, 00:07:32.773 "abort": true, 00:07:32.773 "seek_hole": false, 00:07:32.773 "seek_data": false, 00:07:32.773 "copy": true, 00:07:32.773 "nvme_iov_md": false 00:07:32.773 }, 00:07:32.773 "memory_domains": [ 00:07:32.773 { 00:07:32.773 "dma_device_id": "system", 00:07:32.773 "dma_device_type": 1 00:07:32.773 }, 00:07:32.773 { 00:07:32.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.773 "dma_device_type": 2 00:07:32.773 } 00:07:32.773 ], 00:07:32.773 "driver_specific": {} 00:07:32.773 } 00:07:32.773 ] 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.773 08:45:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.773 "name": "Existed_Raid", 00:07:32.773 "uuid": "2a3fda9b-a8a3-46dc-b4a3-d317d5ba5b75", 00:07:32.773 "strip_size_kb": 64, 00:07:32.773 "state": "configuring", 00:07:32.773 "raid_level": "concat", 00:07:32.773 "superblock": true, 00:07:32.773 "num_base_bdevs": 2, 00:07:32.773 "num_base_bdevs_discovered": 1, 00:07:32.773 "num_base_bdevs_operational": 2, 00:07:32.773 "base_bdevs_list": [ 00:07:32.773 { 00:07:32.773 "name": "BaseBdev1", 00:07:32.773 "uuid": "21e0aa99-86c3-47b3-86cd-e2525f61d481", 00:07:32.773 "is_configured": true, 00:07:32.773 "data_offset": 2048, 00:07:32.773 "data_size": 63488 00:07:32.773 }, 00:07:32.773 { 00:07:32.773 "name": "BaseBdev2", 00:07:32.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.773 
"is_configured": false, 00:07:32.773 "data_offset": 0, 00:07:32.773 "data_size": 0 00:07:32.773 } 00:07:32.773 ] 00:07:32.773 }' 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.773 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.033 [2024-09-28 08:45:10.925367] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:33.033 [2024-09-28 08:45:10.925441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.033 [2024-09-28 08:45:10.933401] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:33.033 [2024-09-28 08:45:10.935557] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.033 [2024-09-28 08:45:10.935604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.033 08:45:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.033 08:45:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.033 "name": "Existed_Raid", 00:07:33.033 "uuid": "6e32c2b3-4549-4aa3-80e7-da429cc89ae8", 00:07:33.033 "strip_size_kb": 64, 00:07:33.033 "state": "configuring", 00:07:33.033 "raid_level": "concat", 00:07:33.033 "superblock": true, 00:07:33.033 "num_base_bdevs": 2, 00:07:33.033 "num_base_bdevs_discovered": 1, 00:07:33.033 "num_base_bdevs_operational": 2, 00:07:33.033 "base_bdevs_list": [ 00:07:33.033 { 00:07:33.033 "name": "BaseBdev1", 00:07:33.033 "uuid": "21e0aa99-86c3-47b3-86cd-e2525f61d481", 00:07:33.033 "is_configured": true, 00:07:33.033 "data_offset": 2048, 00:07:33.033 "data_size": 63488 00:07:33.033 }, 00:07:33.033 { 00:07:33.033 "name": "BaseBdev2", 00:07:33.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.033 "is_configured": false, 00:07:33.033 "data_offset": 0, 00:07:33.033 "data_size": 0 00:07:33.033 } 00:07:33.033 ] 00:07:33.033 }' 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.033 08:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.602 [2024-09-28 08:45:11.405366] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.602 [2024-09-28 08:45:11.405702] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:33.602 [2024-09-28 08:45:11.405737] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.602 BaseBdev2 00:07:33.602 [2024-09-28 08:45:11.406178] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:33.602 [2024-09-28 08:45:11.406340] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:33.602 [2024-09-28 08:45:11.406363] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:33.602 [2024-09-28 08:45:11.406527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.602 
08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.602 [ 00:07:33.602 { 00:07:33.602 "name": "BaseBdev2", 00:07:33.602 "aliases": [ 00:07:33.602 "60250822-4a17-496c-a6ed-3bd8acbcef5c" 00:07:33.602 ], 00:07:33.602 "product_name": "Malloc disk", 00:07:33.602 "block_size": 512, 00:07:33.602 "num_blocks": 65536, 00:07:33.602 "uuid": "60250822-4a17-496c-a6ed-3bd8acbcef5c", 00:07:33.602 "assigned_rate_limits": { 00:07:33.602 "rw_ios_per_sec": 0, 00:07:33.602 "rw_mbytes_per_sec": 0, 00:07:33.602 "r_mbytes_per_sec": 0, 00:07:33.602 "w_mbytes_per_sec": 0 00:07:33.602 }, 00:07:33.602 "claimed": true, 00:07:33.602 "claim_type": "exclusive_write", 00:07:33.602 "zoned": false, 00:07:33.602 "supported_io_types": { 00:07:33.602 "read": true, 00:07:33.602 "write": true, 00:07:33.602 "unmap": true, 00:07:33.602 "flush": true, 00:07:33.602 "reset": true, 00:07:33.602 "nvme_admin": false, 00:07:33.602 "nvme_io": false, 00:07:33.602 "nvme_io_md": false, 00:07:33.602 "write_zeroes": true, 00:07:33.602 "zcopy": true, 00:07:33.602 "get_zone_info": false, 00:07:33.602 "zone_management": false, 00:07:33.602 "zone_append": false, 00:07:33.602 "compare": false, 00:07:33.602 "compare_and_write": false, 00:07:33.602 "abort": true, 00:07:33.602 "seek_hole": false, 00:07:33.602 "seek_data": false, 00:07:33.602 "copy": true, 00:07:33.602 "nvme_iov_md": false 00:07:33.602 }, 00:07:33.602 "memory_domains": [ 00:07:33.602 { 00:07:33.602 "dma_device_id": "system", 00:07:33.602 "dma_device_type": 1 00:07:33.602 }, 00:07:33.602 { 00:07:33.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.602 "dma_device_type": 2 00:07:33.602 } 00:07:33.602 ], 00:07:33.602 "driver_specific": {} 00:07:33.602 } 00:07:33.602 ] 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:33.602 08:45:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.602 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.603 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.603 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.603 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.603 08:45:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.603 "name": "Existed_Raid", 00:07:33.603 "uuid": "6e32c2b3-4549-4aa3-80e7-da429cc89ae8", 00:07:33.603 "strip_size_kb": 64, 00:07:33.603 "state": "online", 00:07:33.603 "raid_level": "concat", 00:07:33.603 "superblock": true, 00:07:33.603 "num_base_bdevs": 2, 00:07:33.603 "num_base_bdevs_discovered": 2, 00:07:33.603 "num_base_bdevs_operational": 2, 00:07:33.603 "base_bdevs_list": [ 00:07:33.603 { 00:07:33.603 "name": "BaseBdev1", 00:07:33.603 "uuid": "21e0aa99-86c3-47b3-86cd-e2525f61d481", 00:07:33.603 "is_configured": true, 00:07:33.603 "data_offset": 2048, 00:07:33.603 "data_size": 63488 00:07:33.603 }, 00:07:33.603 { 00:07:33.603 "name": "BaseBdev2", 00:07:33.603 "uuid": "60250822-4a17-496c-a6ed-3bd8acbcef5c", 00:07:33.603 "is_configured": true, 00:07:33.603 "data_offset": 2048, 00:07:33.603 "data_size": 63488 00:07:33.603 } 00:07:33.603 ] 00:07:33.603 }' 00:07:33.603 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.603 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.171 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:34.171 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:34.171 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.171 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:34.171 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.171 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.171 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:34.171 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:34.171 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.171 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.171 [2024-09-28 08:45:11.872873] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.171 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.171 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.171 "name": "Existed_Raid", 00:07:34.171 "aliases": [ 00:07:34.171 "6e32c2b3-4549-4aa3-80e7-da429cc89ae8" 00:07:34.171 ], 00:07:34.171 "product_name": "Raid Volume", 00:07:34.171 "block_size": 512, 00:07:34.171 "num_blocks": 126976, 00:07:34.171 "uuid": "6e32c2b3-4549-4aa3-80e7-da429cc89ae8", 00:07:34.171 "assigned_rate_limits": { 00:07:34.172 "rw_ios_per_sec": 0, 00:07:34.172 "rw_mbytes_per_sec": 0, 00:07:34.172 "r_mbytes_per_sec": 0, 00:07:34.172 "w_mbytes_per_sec": 0 00:07:34.172 }, 00:07:34.172 "claimed": false, 00:07:34.172 "zoned": false, 00:07:34.172 "supported_io_types": { 00:07:34.172 "read": true, 00:07:34.172 "write": true, 00:07:34.172 "unmap": true, 00:07:34.172 "flush": true, 00:07:34.172 "reset": true, 00:07:34.172 "nvme_admin": false, 00:07:34.172 "nvme_io": false, 00:07:34.172 "nvme_io_md": false, 00:07:34.172 "write_zeroes": true, 00:07:34.172 "zcopy": false, 00:07:34.172 "get_zone_info": false, 00:07:34.172 "zone_management": false, 00:07:34.172 "zone_append": false, 00:07:34.172 "compare": false, 00:07:34.172 "compare_and_write": false, 00:07:34.172 "abort": false, 00:07:34.172 "seek_hole": false, 00:07:34.172 "seek_data": false, 00:07:34.172 "copy": false, 00:07:34.172 "nvme_iov_md": false 00:07:34.172 }, 00:07:34.172 "memory_domains": [ 00:07:34.172 { 00:07:34.172 
"dma_device_id": "system", 00:07:34.172 "dma_device_type": 1 00:07:34.172 }, 00:07:34.172 { 00:07:34.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.172 "dma_device_type": 2 00:07:34.172 }, 00:07:34.172 { 00:07:34.172 "dma_device_id": "system", 00:07:34.172 "dma_device_type": 1 00:07:34.172 }, 00:07:34.172 { 00:07:34.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.172 "dma_device_type": 2 00:07:34.172 } 00:07:34.172 ], 00:07:34.172 "driver_specific": { 00:07:34.172 "raid": { 00:07:34.172 "uuid": "6e32c2b3-4549-4aa3-80e7-da429cc89ae8", 00:07:34.172 "strip_size_kb": 64, 00:07:34.172 "state": "online", 00:07:34.172 "raid_level": "concat", 00:07:34.172 "superblock": true, 00:07:34.172 "num_base_bdevs": 2, 00:07:34.172 "num_base_bdevs_discovered": 2, 00:07:34.172 "num_base_bdevs_operational": 2, 00:07:34.172 "base_bdevs_list": [ 00:07:34.172 { 00:07:34.172 "name": "BaseBdev1", 00:07:34.172 "uuid": "21e0aa99-86c3-47b3-86cd-e2525f61d481", 00:07:34.172 "is_configured": true, 00:07:34.172 "data_offset": 2048, 00:07:34.172 "data_size": 63488 00:07:34.172 }, 00:07:34.172 { 00:07:34.172 "name": "BaseBdev2", 00:07:34.172 "uuid": "60250822-4a17-496c-a6ed-3bd8acbcef5c", 00:07:34.172 "is_configured": true, 00:07:34.172 "data_offset": 2048, 00:07:34.172 "data_size": 63488 00:07:34.172 } 00:07:34.172 ] 00:07:34.172 } 00:07:34.172 } 00:07:34.172 }' 00:07:34.172 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:34.172 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:34.172 BaseBdev2' 00:07:34.172 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.172 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:34.172 08:45:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.172 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:34.172 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.172 08:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.172 08:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.172 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.172 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.172 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.172 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.172 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.172 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:34.172 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.172 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.172 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.172 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.172 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.172 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:34.172 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.172 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.172 [2024-09-28 08:45:12.068294] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:34.172 [2024-09-28 08:45:12.068335] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.172 [2024-09-28 08:45:12.068388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.431 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.431 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:34.431 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:34.431 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.431 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:34.431 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.432 "name": "Existed_Raid", 00:07:34.432 "uuid": "6e32c2b3-4549-4aa3-80e7-da429cc89ae8", 00:07:34.432 "strip_size_kb": 64, 00:07:34.432 "state": "offline", 00:07:34.432 "raid_level": "concat", 00:07:34.432 "superblock": true, 00:07:34.432 "num_base_bdevs": 2, 00:07:34.432 "num_base_bdevs_discovered": 1, 00:07:34.432 "num_base_bdevs_operational": 1, 00:07:34.432 "base_bdevs_list": [ 00:07:34.432 { 00:07:34.432 "name": null, 00:07:34.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.432 "is_configured": false, 00:07:34.432 "data_offset": 0, 00:07:34.432 "data_size": 63488 00:07:34.432 }, 00:07:34.432 { 00:07:34.432 "name": "BaseBdev2", 00:07:34.432 "uuid": "60250822-4a17-496c-a6ed-3bd8acbcef5c", 00:07:34.432 "is_configured": true, 00:07:34.432 "data_offset": 2048, 00:07:34.432 "data_size": 63488 00:07:34.432 } 00:07:34.432 ] 
00:07:34.432 }' 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.432 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.691 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:34.691 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:34.691 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.691 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.691 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.691 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:34.691 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.691 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:34.691 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:34.691 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:34.691 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.691 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.691 [2024-09-28 08:45:12.638749] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:34.691 [2024-09-28 08:45:12.638811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.950 08:45:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61947 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 61947 ']' 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 61947 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61947 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61947' 00:07:34.950 killing process with pid 61947 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 61947 00:07:34.950 [2024-09-28 08:45:12.826100] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.950 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 61947 00:07:34.950 [2024-09-28 08:45:12.843898] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.328 ************************************ 00:07:36.328 END TEST raid_state_function_test_sb 00:07:36.328 ************************************ 00:07:36.328 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:36.328 00:07:36.328 real 0m5.146s 00:07:36.328 user 0m7.178s 00:07:36.328 sys 0m0.875s 00:07:36.328 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.328 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.328 08:45:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:36.328 08:45:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:36.328 08:45:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.328 08:45:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.328 ************************************ 00:07:36.328 START TEST raid_superblock_test 00:07:36.328 ************************************ 00:07:36.328 08:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:07:36.328 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:36.328 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:07:36.328 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:36.328 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:36.328 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62200 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62200 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 62200 ']' 00:07:36.329 08:45:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.329 08:45:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.329 [2024-09-28 08:45:14.316788] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:36.329 [2024-09-28 08:45:14.316905] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62200 ] 00:07:36.587 [2024-09-28 08:45:14.478073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.845 [2024-09-28 08:45:14.724874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.105 [2024-09-28 08:45:14.947619] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.105 [2024-09-28 08:45:14.947659] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:37.365 
08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.365 malloc1 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.365 [2024-09-28 08:45:15.177684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:37.365 [2024-09-28 08:45:15.177797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.365 [2024-09-28 08:45:15.177843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:37.365 [2024-09-28 08:45:15.177878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:37.365 [2024-09-28 08:45:15.180336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.365 [2024-09-28 08:45:15.180405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:37.365 pt1 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.365 malloc2 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:37.365 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.365 [2024-09-28 08:45:15.256899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:37.365 [2024-09-28 08:45:15.256960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.365 [2024-09-28 08:45:15.256985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:37.365 [2024-09-28 08:45:15.256994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.365 [2024-09-28 08:45:15.259510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.365 [2024-09-28 08:45:15.259543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:37.365 pt2 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.366 [2024-09-28 08:45:15.264964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:37.366 [2024-09-28 08:45:15.267104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:37.366 [2024-09-28 08:45:15.267272] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:37.366 [2024-09-28 08:45:15.267285] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:37.366 [2024-09-28 08:45:15.267517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:37.366 [2024-09-28 08:45:15.267676] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:37.366 [2024-09-28 08:45:15.267692] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:37.366 [2024-09-28 08:45:15.267873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.366 08:45:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.366 "name": "raid_bdev1", 00:07:37.366 "uuid": "c52f9c3a-058b-479e-b7bb-994b3b4bc511", 00:07:37.366 "strip_size_kb": 64, 00:07:37.366 "state": "online", 00:07:37.366 "raid_level": "concat", 00:07:37.366 "superblock": true, 00:07:37.366 "num_base_bdevs": 2, 00:07:37.366 "num_base_bdevs_discovered": 2, 00:07:37.366 "num_base_bdevs_operational": 2, 00:07:37.366 "base_bdevs_list": [ 00:07:37.366 { 00:07:37.366 "name": "pt1", 00:07:37.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:37.366 "is_configured": true, 00:07:37.366 "data_offset": 2048, 00:07:37.366 "data_size": 63488 00:07:37.366 }, 00:07:37.366 { 00:07:37.366 "name": "pt2", 00:07:37.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.366 "is_configured": true, 00:07:37.366 "data_offset": 2048, 00:07:37.366 "data_size": 63488 00:07:37.366 } 00:07:37.366 ] 00:07:37.366 }' 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.366 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.934 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:37.934 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:37.934 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:37.934 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:37.934 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:37.934 
08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:37.934 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:37.934 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:37.934 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.934 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.934 [2024-09-28 08:45:15.740406] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.934 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.934 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:37.934 "name": "raid_bdev1", 00:07:37.934 "aliases": [ 00:07:37.934 "c52f9c3a-058b-479e-b7bb-994b3b4bc511" 00:07:37.934 ], 00:07:37.934 "product_name": "Raid Volume", 00:07:37.934 "block_size": 512, 00:07:37.934 "num_blocks": 126976, 00:07:37.934 "uuid": "c52f9c3a-058b-479e-b7bb-994b3b4bc511", 00:07:37.934 "assigned_rate_limits": { 00:07:37.934 "rw_ios_per_sec": 0, 00:07:37.934 "rw_mbytes_per_sec": 0, 00:07:37.934 "r_mbytes_per_sec": 0, 00:07:37.934 "w_mbytes_per_sec": 0 00:07:37.934 }, 00:07:37.934 "claimed": false, 00:07:37.934 "zoned": false, 00:07:37.934 "supported_io_types": { 00:07:37.934 "read": true, 00:07:37.934 "write": true, 00:07:37.934 "unmap": true, 00:07:37.934 "flush": true, 00:07:37.934 "reset": true, 00:07:37.934 "nvme_admin": false, 00:07:37.934 "nvme_io": false, 00:07:37.934 "nvme_io_md": false, 00:07:37.934 "write_zeroes": true, 00:07:37.935 "zcopy": false, 00:07:37.935 "get_zone_info": false, 00:07:37.935 "zone_management": false, 00:07:37.935 "zone_append": false, 00:07:37.935 "compare": false, 00:07:37.935 "compare_and_write": false, 00:07:37.935 "abort": false, 00:07:37.935 "seek_hole": false, 00:07:37.935 
"seek_data": false, 00:07:37.935 "copy": false, 00:07:37.935 "nvme_iov_md": false 00:07:37.935 }, 00:07:37.935 "memory_domains": [ 00:07:37.935 { 00:07:37.935 "dma_device_id": "system", 00:07:37.935 "dma_device_type": 1 00:07:37.935 }, 00:07:37.935 { 00:07:37.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.935 "dma_device_type": 2 00:07:37.935 }, 00:07:37.935 { 00:07:37.935 "dma_device_id": "system", 00:07:37.935 "dma_device_type": 1 00:07:37.935 }, 00:07:37.935 { 00:07:37.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.935 "dma_device_type": 2 00:07:37.935 } 00:07:37.935 ], 00:07:37.935 "driver_specific": { 00:07:37.935 "raid": { 00:07:37.935 "uuid": "c52f9c3a-058b-479e-b7bb-994b3b4bc511", 00:07:37.935 "strip_size_kb": 64, 00:07:37.935 "state": "online", 00:07:37.935 "raid_level": "concat", 00:07:37.935 "superblock": true, 00:07:37.935 "num_base_bdevs": 2, 00:07:37.935 "num_base_bdevs_discovered": 2, 00:07:37.935 "num_base_bdevs_operational": 2, 00:07:37.935 "base_bdevs_list": [ 00:07:37.935 { 00:07:37.935 "name": "pt1", 00:07:37.935 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:37.935 "is_configured": true, 00:07:37.935 "data_offset": 2048, 00:07:37.935 "data_size": 63488 00:07:37.935 }, 00:07:37.935 { 00:07:37.935 "name": "pt2", 00:07:37.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.935 "is_configured": true, 00:07:37.935 "data_offset": 2048, 00:07:37.935 "data_size": 63488 00:07:37.935 } 00:07:37.935 ] 00:07:37.935 } 00:07:37.935 } 00:07:37.935 }' 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:37.935 pt2' 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.935 08:45:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.935 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.194 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.194 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.194 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:38.194 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:38.194 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.194 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.194 [2024-09-28 08:45:15.963942] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.195 08:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.195 08:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c52f9c3a-058b-479e-b7bb-994b3b4bc511 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c52f9c3a-058b-479e-b7bb-994b3b4bc511 ']' 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.195 [2024-09-28 08:45:16.007610] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:38.195 [2024-09-28 08:45:16.007693] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.195 [2024-09-28 08:45:16.007793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.195 [2024-09-28 08:45:16.007843] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:38.195 [2024-09-28 08:45:16.007859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.195 [2024-09-28 08:45:16.127420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:38.195 [2024-09-28 08:45:16.129560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:38.195 [2024-09-28 08:45:16.129626] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:38.195 [2024-09-28 08:45:16.129710] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:38.195 [2024-09-28 08:45:16.129728] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:38.195 [2024-09-28 08:45:16.129739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:38.195 request: 00:07:38.195 { 00:07:38.195 "name": "raid_bdev1", 00:07:38.195 "raid_level": "concat", 00:07:38.195 "base_bdevs": [ 00:07:38.195 "malloc1", 00:07:38.195 "malloc2" 00:07:38.195 ], 00:07:38.195 "strip_size_kb": 64, 00:07:38.195 "superblock": false, 00:07:38.195 "method": "bdev_raid_create", 00:07:38.195 "req_id": 1 00:07:38.195 } 00:07:38.195 Got JSON-RPC error response 00:07:38.195 response: 00:07:38.195 { 00:07:38.195 "code": -17, 00:07:38.195 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:38.195 } 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:38.195 
08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.195 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.455 [2024-09-28 08:45:16.191281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:38.455 [2024-09-28 08:45:16.191371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.455 [2024-09-28 08:45:16.191423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:38.455 [2024-09-28 08:45:16.191458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.455 [2024-09-28 08:45:16.193939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.455 [2024-09-28 08:45:16.194025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:38.455 [2024-09-28 08:45:16.194120] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:38.455 [2024-09-28 08:45:16.194203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:38.455 pt1 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.455 "name": "raid_bdev1", 00:07:38.455 "uuid": "c52f9c3a-058b-479e-b7bb-994b3b4bc511", 00:07:38.455 "strip_size_kb": 64, 00:07:38.455 "state": "configuring", 00:07:38.455 "raid_level": "concat", 00:07:38.455 "superblock": true, 00:07:38.455 "num_base_bdevs": 2, 00:07:38.455 "num_base_bdevs_discovered": 1, 00:07:38.455 "num_base_bdevs_operational": 2, 00:07:38.455 "base_bdevs_list": [ 00:07:38.455 { 00:07:38.455 "name": "pt1", 00:07:38.455 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:38.455 "is_configured": true, 00:07:38.455 "data_offset": 2048, 00:07:38.455 "data_size": 63488 00:07:38.455 }, 00:07:38.455 { 00:07:38.455 "name": null, 00:07:38.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:38.455 "is_configured": false, 00:07:38.455 "data_offset": 2048, 00:07:38.455 "data_size": 63488 00:07:38.455 } 00:07:38.455 ] 00:07:38.455 }' 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.455 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.714 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:38.714 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:38.714 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:38.714 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:38.714 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.714 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.714 [2024-09-28 08:45:16.626582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:38.714 [2024-09-28 08:45:16.626736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.714 [2024-09-28 08:45:16.626767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:38.714 [2024-09-28 08:45:16.626779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.714 [2024-09-28 08:45:16.627317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.714 [2024-09-28 08:45:16.627341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:38.715 [2024-09-28 08:45:16.627429] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:38.715 [2024-09-28 08:45:16.627454] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:38.715 [2024-09-28 08:45:16.627579] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:38.715 [2024-09-28 08:45:16.627590] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:38.715 [2024-09-28 08:45:16.627859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:38.715 [2024-09-28 08:45:16.628017] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:38.715 [2024-09-28 08:45:16.628027] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:38.715 [2024-09-28 08:45:16.628154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.715 pt2 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.715 "name": "raid_bdev1", 00:07:38.715 "uuid": "c52f9c3a-058b-479e-b7bb-994b3b4bc511", 00:07:38.715 "strip_size_kb": 64, 00:07:38.715 "state": "online", 00:07:38.715 "raid_level": "concat", 00:07:38.715 "superblock": true, 00:07:38.715 "num_base_bdevs": 2, 00:07:38.715 "num_base_bdevs_discovered": 2, 00:07:38.715 "num_base_bdevs_operational": 2, 00:07:38.715 "base_bdevs_list": [ 00:07:38.715 { 00:07:38.715 "name": "pt1", 00:07:38.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:38.715 "is_configured": true, 00:07:38.715 "data_offset": 2048, 00:07:38.715 "data_size": 63488 00:07:38.715 }, 00:07:38.715 { 00:07:38.715 "name": "pt2", 00:07:38.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:38.715 "is_configured": true, 00:07:38.715 "data_offset": 2048, 00:07:38.715 "data_size": 63488 00:07:38.715 } 00:07:38.715 ] 00:07:38.715 }' 00:07:38.715 08:45:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.715 08:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.283 [2024-09-28 08:45:17.046150] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:39.283 "name": "raid_bdev1", 00:07:39.283 "aliases": [ 00:07:39.283 "c52f9c3a-058b-479e-b7bb-994b3b4bc511" 00:07:39.283 ], 00:07:39.283 "product_name": "Raid Volume", 00:07:39.283 "block_size": 512, 00:07:39.283 "num_blocks": 126976, 00:07:39.283 "uuid": "c52f9c3a-058b-479e-b7bb-994b3b4bc511", 00:07:39.283 "assigned_rate_limits": { 00:07:39.283 "rw_ios_per_sec": 0, 00:07:39.283 "rw_mbytes_per_sec": 0, 00:07:39.283 
"r_mbytes_per_sec": 0, 00:07:39.283 "w_mbytes_per_sec": 0 00:07:39.283 }, 00:07:39.283 "claimed": false, 00:07:39.283 "zoned": false, 00:07:39.283 "supported_io_types": { 00:07:39.283 "read": true, 00:07:39.283 "write": true, 00:07:39.283 "unmap": true, 00:07:39.283 "flush": true, 00:07:39.283 "reset": true, 00:07:39.283 "nvme_admin": false, 00:07:39.283 "nvme_io": false, 00:07:39.283 "nvme_io_md": false, 00:07:39.283 "write_zeroes": true, 00:07:39.283 "zcopy": false, 00:07:39.283 "get_zone_info": false, 00:07:39.283 "zone_management": false, 00:07:39.283 "zone_append": false, 00:07:39.283 "compare": false, 00:07:39.283 "compare_and_write": false, 00:07:39.283 "abort": false, 00:07:39.283 "seek_hole": false, 00:07:39.283 "seek_data": false, 00:07:39.283 "copy": false, 00:07:39.283 "nvme_iov_md": false 00:07:39.283 }, 00:07:39.283 "memory_domains": [ 00:07:39.283 { 00:07:39.283 "dma_device_id": "system", 00:07:39.283 "dma_device_type": 1 00:07:39.283 }, 00:07:39.283 { 00:07:39.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.283 "dma_device_type": 2 00:07:39.283 }, 00:07:39.283 { 00:07:39.283 "dma_device_id": "system", 00:07:39.283 "dma_device_type": 1 00:07:39.283 }, 00:07:39.283 { 00:07:39.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.283 "dma_device_type": 2 00:07:39.283 } 00:07:39.283 ], 00:07:39.283 "driver_specific": { 00:07:39.283 "raid": { 00:07:39.283 "uuid": "c52f9c3a-058b-479e-b7bb-994b3b4bc511", 00:07:39.283 "strip_size_kb": 64, 00:07:39.283 "state": "online", 00:07:39.283 "raid_level": "concat", 00:07:39.283 "superblock": true, 00:07:39.283 "num_base_bdevs": 2, 00:07:39.283 "num_base_bdevs_discovered": 2, 00:07:39.283 "num_base_bdevs_operational": 2, 00:07:39.283 "base_bdevs_list": [ 00:07:39.283 { 00:07:39.283 "name": "pt1", 00:07:39.283 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:39.283 "is_configured": true, 00:07:39.283 "data_offset": 2048, 00:07:39.283 "data_size": 63488 00:07:39.283 }, 00:07:39.283 { 00:07:39.283 "name": 
"pt2", 00:07:39.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:39.283 "is_configured": true, 00:07:39.283 "data_offset": 2048, 00:07:39.283 "data_size": 63488 00:07:39.283 } 00:07:39.283 ] 00:07:39.283 } 00:07:39.283 } 00:07:39.283 }' 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:39.283 pt2' 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.283 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.283 [2024-09-28 08:45:17.261733] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.543 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.543 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c52f9c3a-058b-479e-b7bb-994b3b4bc511 '!=' c52f9c3a-058b-479e-b7bb-994b3b4bc511 ']' 00:07:39.543 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:39.543 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.543 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:39.543 08:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62200 00:07:39.543 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 62200 ']' 00:07:39.543 08:45:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 62200 00:07:39.543 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:39.543 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.543 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62200 00:07:39.543 killing process with pid 62200 00:07:39.543 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.543 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.543 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62200' 00:07:39.543 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 62200 00:07:39.543 [2024-09-28 08:45:17.351433] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.543 [2024-09-28 08:45:17.351535] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.543 [2024-09-28 08:45:17.351589] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.543 08:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 62200 00:07:39.543 [2024-09-28 08:45:17.351601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:39.802 [2024-09-28 08:45:17.564267] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:41.225 08:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:41.225 ************************************ 00:07:41.225 END TEST raid_superblock_test 00:07:41.225 ************************************ 00:07:41.225 00:07:41.225 real 0m4.659s 00:07:41.225 user 0m6.303s 00:07:41.225 sys 0m0.861s 00:07:41.225 08:45:18 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.225 08:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.225 08:45:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:41.225 08:45:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:41.225 08:45:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.225 08:45:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:41.225 ************************************ 00:07:41.225 START TEST raid_read_error_test 00:07:41.225 ************************************ 00:07:41.225 08:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:41.225 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:41.225 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:41.225 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:41.225 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:41.225 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:41.225 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 
-- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0ZV3EP2mh4 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62406 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62406 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 62406 ']' 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.226 08:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.226 [2024-09-28 08:45:19.054771] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:41.226 [2024-09-28 08:45:19.054878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62406 ] 00:07:41.226 [2024-09-28 08:45:19.218807] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.484 [2024-09-28 08:45:19.473887] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.743 [2024-09-28 08:45:19.702863] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.743 [2024-09-28 08:45:19.702909] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.002 08:45:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.002 08:45:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:42.002 08:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:42.002 08:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:42.002 08:45:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.002 08:45:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:42.002 BaseBdev1_malloc 00:07:42.002 08:45:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.002 08:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:42.002 08:45:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.002 08:45:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.002 true 00:07:42.002 08:45:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.002 08:45:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:42.002 08:45:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.002 08:45:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.261 [2024-09-28 08:45:19.997450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:42.261 [2024-09-28 08:45:19.997510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.261 [2024-09-28 08:45:19.997530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:42.261 [2024-09-28 08:45:19.997541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.261 [2024-09-28 08:45:19.999992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.261 [2024-09-28 08:45:20.000031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:42.261 BaseBdev1 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:42.261 08:45:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.261 BaseBdev2_malloc 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.261 true 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.261 [2024-09-28 08:45:20.098871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:42.261 [2024-09-28 08:45:20.098934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.261 [2024-09-28 08:45:20.098968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:42.261 [2024-09-28 08:45:20.098979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.261 [2024-09-28 08:45:20.101323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.261 [2024-09-28 08:45:20.101373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:07:42.261 BaseBdev2 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.261 [2024-09-28 08:45:20.110905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.261 [2024-09-28 08:45:20.113017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.261 [2024-09-28 08:45:20.113210] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:42.261 [2024-09-28 08:45:20.113225] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:42.261 [2024-09-28 08:45:20.113451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:42.261 [2024-09-28 08:45:20.113614] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:42.261 [2024-09-28 08:45:20.113624] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:42.261 [2024-09-28 08:45:20.113811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.261 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.262 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.262 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.262 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.262 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.262 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.262 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.262 "name": "raid_bdev1", 00:07:42.262 "uuid": "97fdcd83-21de-4fa5-a037-49fc67094c70", 00:07:42.262 "strip_size_kb": 64, 00:07:42.262 "state": "online", 00:07:42.262 "raid_level": "concat", 00:07:42.262 "superblock": true, 00:07:42.262 "num_base_bdevs": 2, 00:07:42.262 "num_base_bdevs_discovered": 2, 00:07:42.262 "num_base_bdevs_operational": 2, 00:07:42.262 "base_bdevs_list": [ 00:07:42.262 { 00:07:42.262 "name": "BaseBdev1", 00:07:42.262 "uuid": "c820cdd3-e368-5a05-8712-ad1c550ec0dc", 00:07:42.262 "is_configured": true, 00:07:42.262 "data_offset": 2048, 00:07:42.262 "data_size": 63488 
00:07:42.262 }, 00:07:42.262 { 00:07:42.262 "name": "BaseBdev2", 00:07:42.262 "uuid": "5dd5994d-7281-595a-a1aa-f38cd0c596e9", 00:07:42.262 "is_configured": true, 00:07:42.262 "data_offset": 2048, 00:07:42.262 "data_size": 63488 00:07:42.262 } 00:07:42.262 ] 00:07:42.262 }' 00:07:42.262 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.262 08:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.829 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:42.829 08:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:42.829 [2024-09-28 08:45:20.603597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:43.765 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:43.765 08:45:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.765 08:45:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.765 08:45:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.765 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:43.765 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:43.765 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:43.765 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:43.765 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.765 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:43.766 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.766 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.766 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.766 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.766 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.766 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.766 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.766 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.766 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.766 08:45:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.766 08:45:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.766 08:45:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.766 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.766 "name": "raid_bdev1", 00:07:43.766 "uuid": "97fdcd83-21de-4fa5-a037-49fc67094c70", 00:07:43.766 "strip_size_kb": 64, 00:07:43.766 "state": "online", 00:07:43.766 "raid_level": "concat", 00:07:43.766 "superblock": true, 00:07:43.766 "num_base_bdevs": 2, 00:07:43.766 "num_base_bdevs_discovered": 2, 00:07:43.766 "num_base_bdevs_operational": 2, 00:07:43.766 "base_bdevs_list": [ 00:07:43.766 { 00:07:43.766 "name": "BaseBdev1", 00:07:43.766 "uuid": "c820cdd3-e368-5a05-8712-ad1c550ec0dc", 00:07:43.766 "is_configured": true, 00:07:43.766 "data_offset": 2048, 00:07:43.766 "data_size": 63488 
00:07:43.766 }, 00:07:43.766 { 00:07:43.766 "name": "BaseBdev2", 00:07:43.766 "uuid": "5dd5994d-7281-595a-a1aa-f38cd0c596e9", 00:07:43.766 "is_configured": true, 00:07:43.766 "data_offset": 2048, 00:07:43.766 "data_size": 63488 00:07:43.766 } 00:07:43.766 ] 00:07:43.766 }' 00:07:43.766 08:45:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.766 08:45:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.025 08:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:44.025 08:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.025 08:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.025 [2024-09-28 08:45:22.008217] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.025 [2024-09-28 08:45:22.008330] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.025 [2024-09-28 08:45:22.010940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.025 [2024-09-28 08:45:22.010984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.025 [2024-09-28 08:45:22.011019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.025 [2024-09-28 08:45:22.011031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:44.025 08:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.025 { 00:07:44.025 "results": [ 00:07:44.025 { 00:07:44.025 "job": "raid_bdev1", 00:07:44.025 "core_mask": "0x1", 00:07:44.025 "workload": "randrw", 00:07:44.025 "percentage": 50, 00:07:44.025 "status": "finished", 00:07:44.025 "queue_depth": 1, 00:07:44.025 "io_size": 131072, 00:07:44.025 "runtime": 
1.405322, 00:07:44.025 "iops": 15160.226624218507, 00:07:44.025 "mibps": 1895.0283280273134, 00:07:44.025 "io_failed": 1, 00:07:44.025 "io_timeout": 0, 00:07:44.025 "avg_latency_us": 92.60653509251961, 00:07:44.025 "min_latency_us": 24.370305676855896, 00:07:44.025 "max_latency_us": 1459.5353711790392 00:07:44.025 } 00:07:44.025 ], 00:07:44.025 "core_count": 1 00:07:44.025 } 00:07:44.025 08:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62406 00:07:44.025 08:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 62406 ']' 00:07:44.025 08:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 62406 00:07:44.025 08:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:44.284 08:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.284 08:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62406 00:07:44.284 08:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.284 08:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.284 08:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62406' 00:07:44.284 killing process with pid 62406 00:07:44.284 08:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 62406 00:07:44.284 [2024-09-28 08:45:22.055911] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.284 08:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 62406 00:07:44.284 [2024-09-28 08:45:22.196947] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.661 08:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0ZV3EP2mh4 00:07:45.661 08:45:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:45.661 08:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:45.661 08:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:45.661 08:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:45.661 08:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.661 08:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:45.661 08:45:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:45.661 00:07:45.661 real 0m4.632s 00:07:45.661 user 0m5.400s 00:07:45.661 sys 0m0.641s 00:07:45.661 08:45:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.661 08:45:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.661 ************************************ 00:07:45.661 END TEST raid_read_error_test 00:07:45.661 ************************************ 00:07:45.661 08:45:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:45.661 08:45:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:45.661 08:45:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.661 08:45:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.920 ************************************ 00:07:45.920 START TEST raid_write_error_test 00:07:45.920 ************************************ 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:45.920 08:45:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:45.920 08:45:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.woCfwdlcm9 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62552 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62552 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 62552 ']' 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.920 08:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.920 [2024-09-28 08:45:23.768190] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:45.920 [2024-09-28 08:45:23.768302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62552 ] 00:07:46.179 [2024-09-28 08:45:23.936780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.438 [2024-09-28 08:45:24.176909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.438 [2024-09-28 08:45:24.391768] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.438 [2024-09-28 08:45:24.391809] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.697 BaseBdev1_malloc 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.697 true 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.697 [2024-09-28 08:45:24.638904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:46.697 [2024-09-28 08:45:24.638960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.697 [2024-09-28 08:45:24.638993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:46.697 [2024-09-28 08:45:24.639004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.697 [2024-09-28 08:45:24.641402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.697 [2024-09-28 08:45:24.641441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:46.697 BaseBdev1 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.697 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.955 BaseBdev2_malloc 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:46.955 08:45:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.955 true 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.955 [2024-09-28 08:45:24.738071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:46.955 [2024-09-28 08:45:24.738124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.955 [2024-09-28 08:45:24.738158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:46.955 [2024-09-28 08:45:24.738168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.955 [2024-09-28 08:45:24.740577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.955 [2024-09-28 08:45:24.740614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:46.955 BaseBdev2 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.955 [2024-09-28 08:45:24.750128] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:46.955 [2024-09-28 08:45:24.752173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.955 [2024-09-28 08:45:24.752372] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:46.955 [2024-09-28 08:45:24.752387] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:46.955 [2024-09-28 08:45:24.752629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:46.955 [2024-09-28 08:45:24.752893] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:46.955 [2024-09-28 08:45:24.752926] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:46.955 [2024-09-28 08:45:24.753092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.955 08:45:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.955 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.956 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.956 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.956 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.956 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.956 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.956 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.956 "name": "raid_bdev1", 00:07:46.956 "uuid": "a3baff27-f35e-4263-a970-91800a538fca", 00:07:46.956 "strip_size_kb": 64, 00:07:46.956 "state": "online", 00:07:46.956 "raid_level": "concat", 00:07:46.956 "superblock": true, 00:07:46.956 "num_base_bdevs": 2, 00:07:46.956 "num_base_bdevs_discovered": 2, 00:07:46.956 "num_base_bdevs_operational": 2, 00:07:46.956 "base_bdevs_list": [ 00:07:46.956 { 00:07:46.956 "name": "BaseBdev1", 00:07:46.956 "uuid": "26467357-e249-576a-bec3-717a6d28241d", 00:07:46.956 "is_configured": true, 00:07:46.956 "data_offset": 2048, 00:07:46.956 "data_size": 63488 00:07:46.956 }, 00:07:46.956 { 00:07:46.956 "name": "BaseBdev2", 00:07:46.956 "uuid": "89ffa27a-2360-557d-a947-829af86bdb48", 00:07:46.956 "is_configured": true, 00:07:46.956 "data_offset": 2048, 00:07:46.956 "data_size": 63488 00:07:46.956 } 00:07:46.956 ] 00:07:46.956 }' 00:07:46.956 08:45:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.956 08:45:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.213 08:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:47.213 08:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:47.472 [2024-09-28 08:45:25.278619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:48.419 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:48.419 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.419 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.419 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.419 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:48.419 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:48.419 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:48.419 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:48.419 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.419 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.419 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.419 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.419 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.419 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.419 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:48.419 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.420 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.420 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.420 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.420 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.420 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.420 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.420 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.420 "name": "raid_bdev1", 00:07:48.420 "uuid": "a3baff27-f35e-4263-a970-91800a538fca", 00:07:48.420 "strip_size_kb": 64, 00:07:48.420 "state": "online", 00:07:48.420 "raid_level": "concat", 00:07:48.420 "superblock": true, 00:07:48.420 "num_base_bdevs": 2, 00:07:48.420 "num_base_bdevs_discovered": 2, 00:07:48.420 "num_base_bdevs_operational": 2, 00:07:48.420 "base_bdevs_list": [ 00:07:48.420 { 00:07:48.420 "name": "BaseBdev1", 00:07:48.420 "uuid": "26467357-e249-576a-bec3-717a6d28241d", 00:07:48.420 "is_configured": true, 00:07:48.420 "data_offset": 2048, 00:07:48.420 "data_size": 63488 00:07:48.420 }, 00:07:48.420 { 00:07:48.420 "name": "BaseBdev2", 00:07:48.420 "uuid": "89ffa27a-2360-557d-a947-829af86bdb48", 00:07:48.420 "is_configured": true, 00:07:48.420 "data_offset": 2048, 00:07:48.420 "data_size": 63488 00:07:48.420 } 00:07:48.420 ] 00:07:48.420 }' 00:07:48.420 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.420 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.679 08:45:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:48.679 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.938 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.938 [2024-09-28 08:45:26.679339] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:48.938 [2024-09-28 08:45:26.679461] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.938 [2024-09-28 08:45:26.682125] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.938 [2024-09-28 08:45:26.682229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.938 [2024-09-28 08:45:26.682285] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.938 [2024-09-28 08:45:26.682327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:48.938 { 00:07:48.938 "results": [ 00:07:48.938 { 00:07:48.938 "job": "raid_bdev1", 00:07:48.938 "core_mask": "0x1", 00:07:48.938 "workload": "randrw", 00:07:48.938 "percentage": 50, 00:07:48.938 "status": "finished", 00:07:48.938 "queue_depth": 1, 00:07:48.938 "io_size": 131072, 00:07:48.938 "runtime": 1.401451, 00:07:48.938 "iops": 15160.002026471137, 00:07:48.938 "mibps": 1895.000253308892, 00:07:48.938 "io_failed": 1, 00:07:48.938 "io_timeout": 0, 00:07:48.938 "avg_latency_us": 92.60789922974998, 00:07:48.938 "min_latency_us": 24.370305676855896, 00:07:48.938 "max_latency_us": 1387.989519650655 00:07:48.938 } 00:07:48.938 ], 00:07:48.938 "core_count": 1 00:07:48.938 } 00:07:48.938 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.938 08:45:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62552 00:07:48.938 08:45:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 62552 ']' 00:07:48.938 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 62552 00:07:48.938 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:48.938 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.939 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62552 00:07:48.939 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.939 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.939 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62552' 00:07:48.939 killing process with pid 62552 00:07:48.939 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 62552 00:07:48.939 [2024-09-28 08:45:26.728363] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.939 08:45:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 62552 00:07:48.939 [2024-09-28 08:45:26.876586] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:50.318 08:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.woCfwdlcm9 00:07:50.318 08:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:50.318 08:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:50.318 08:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:50.318 ************************************ 00:07:50.318 END TEST raid_write_error_test 00:07:50.318 ************************************ 00:07:50.318 08:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # 
has_redundancy concat 00:07:50.318 08:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:50.318 08:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:50.318 08:45:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:50.318 00:07:50.318 real 0m4.590s 00:07:50.318 user 0m5.347s 00:07:50.318 sys 0m0.643s 00:07:50.319 08:45:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.319 08:45:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.319 08:45:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:50.319 08:45:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:50.319 08:45:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:50.319 08:45:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.319 08:45:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:50.579 ************************************ 00:07:50.579 START TEST raid_state_function_test 00:07:50.579 ************************************ 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( 
i <= num_base_bdevs )) 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:50.579 Process raid pid: 62695 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62695 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62695' 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62695 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62695 ']' 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.579 08:45:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.579 [2024-09-28 08:45:28.427130] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:50.579 [2024-09-28 08:45:28.427378] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.839 [2024-09-28 08:45:28.597316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.098 [2024-09-28 08:45:28.838757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.098 [2024-09-28 08:45:29.075818] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.098 [2024-09-28 08:45:29.075855] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.357 [2024-09-28 08:45:29.259995] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.357 [2024-09-28 08:45:29.260097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.357 [2024-09-28 08:45:29.260128] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.357 [2024-09-28 08:45:29.260152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.357 08:45:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.357 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.357 "name": "Existed_Raid", 00:07:51.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.357 "strip_size_kb": 0, 00:07:51.357 "state": "configuring", 00:07:51.357 
"raid_level": "raid1", 00:07:51.357 "superblock": false, 00:07:51.357 "num_base_bdevs": 2, 00:07:51.357 "num_base_bdevs_discovered": 0, 00:07:51.357 "num_base_bdevs_operational": 2, 00:07:51.358 "base_bdevs_list": [ 00:07:51.358 { 00:07:51.358 "name": "BaseBdev1", 00:07:51.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.358 "is_configured": false, 00:07:51.358 "data_offset": 0, 00:07:51.358 "data_size": 0 00:07:51.358 }, 00:07:51.358 { 00:07:51.358 "name": "BaseBdev2", 00:07:51.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.358 "is_configured": false, 00:07:51.358 "data_offset": 0, 00:07:51.358 "data_size": 0 00:07:51.358 } 00:07:51.358 ] 00:07:51.358 }' 00:07:51.358 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.358 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.926 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.926 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.926 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.926 [2024-09-28 08:45:29.679253] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.926 [2024-09-28 08:45:29.679333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:51.926 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:51.927 [2024-09-28 08:45:29.687261] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.927 [2024-09-28 08:45:29.687352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.927 [2024-09-28 08:45:29.687382] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.927 [2024-09-28 08:45:29.687409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.927 [2024-09-28 08:45:29.770383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.927 BaseBdev1 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.927 [ 00:07:51.927 { 00:07:51.927 "name": "BaseBdev1", 00:07:51.927 "aliases": [ 00:07:51.927 "fe11baac-221e-40b5-9b1a-fba10349c814" 00:07:51.927 ], 00:07:51.927 "product_name": "Malloc disk", 00:07:51.927 "block_size": 512, 00:07:51.927 "num_blocks": 65536, 00:07:51.927 "uuid": "fe11baac-221e-40b5-9b1a-fba10349c814", 00:07:51.927 "assigned_rate_limits": { 00:07:51.927 "rw_ios_per_sec": 0, 00:07:51.927 "rw_mbytes_per_sec": 0, 00:07:51.927 "r_mbytes_per_sec": 0, 00:07:51.927 "w_mbytes_per_sec": 0 00:07:51.927 }, 00:07:51.927 "claimed": true, 00:07:51.927 "claim_type": "exclusive_write", 00:07:51.927 "zoned": false, 00:07:51.927 "supported_io_types": { 00:07:51.927 "read": true, 00:07:51.927 "write": true, 00:07:51.927 "unmap": true, 00:07:51.927 "flush": true, 00:07:51.927 "reset": true, 00:07:51.927 "nvme_admin": false, 00:07:51.927 "nvme_io": false, 00:07:51.927 "nvme_io_md": false, 00:07:51.927 "write_zeroes": true, 00:07:51.927 "zcopy": true, 00:07:51.927 "get_zone_info": false, 00:07:51.927 "zone_management": false, 00:07:51.927 "zone_append": false, 00:07:51.927 "compare": false, 00:07:51.927 "compare_and_write": false, 00:07:51.927 "abort": true, 00:07:51.927 "seek_hole": false, 00:07:51.927 "seek_data": false, 00:07:51.927 "copy": true, 00:07:51.927 "nvme_iov_md": 
false 00:07:51.927 }, 00:07:51.927 "memory_domains": [ 00:07:51.927 { 00:07:51.927 "dma_device_id": "system", 00:07:51.927 "dma_device_type": 1 00:07:51.927 }, 00:07:51.927 { 00:07:51.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.927 "dma_device_type": 2 00:07:51.927 } 00:07:51.927 ], 00:07:51.927 "driver_specific": {} 00:07:51.927 } 00:07:51.927 ] 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.927 
08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.927 "name": "Existed_Raid", 00:07:51.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.927 "strip_size_kb": 0, 00:07:51.927 "state": "configuring", 00:07:51.927 "raid_level": "raid1", 00:07:51.927 "superblock": false, 00:07:51.927 "num_base_bdevs": 2, 00:07:51.927 "num_base_bdevs_discovered": 1, 00:07:51.927 "num_base_bdevs_operational": 2, 00:07:51.927 "base_bdevs_list": [ 00:07:51.927 { 00:07:51.927 "name": "BaseBdev1", 00:07:51.927 "uuid": "fe11baac-221e-40b5-9b1a-fba10349c814", 00:07:51.927 "is_configured": true, 00:07:51.927 "data_offset": 0, 00:07:51.927 "data_size": 65536 00:07:51.927 }, 00:07:51.927 { 00:07:51.927 "name": "BaseBdev2", 00:07:51.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.927 "is_configured": false, 00:07:51.927 "data_offset": 0, 00:07:51.927 "data_size": 0 00:07:51.927 } 00:07:51.927 ] 00:07:51.927 }' 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.927 08:45:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.496 [2024-09-28 08:45:30.253604] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:52.496 [2024-09-28 08:45:30.253704] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.496 [2024-09-28 08:45:30.265582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.496 [2024-09-28 08:45:30.267771] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:52.496 [2024-09-28 08:45:30.267847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.496 "name": "Existed_Raid", 00:07:52.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.496 "strip_size_kb": 0, 00:07:52.496 "state": "configuring", 00:07:52.496 "raid_level": "raid1", 00:07:52.496 "superblock": false, 00:07:52.496 "num_base_bdevs": 2, 00:07:52.496 "num_base_bdevs_discovered": 1, 00:07:52.496 "num_base_bdevs_operational": 2, 00:07:52.496 "base_bdevs_list": [ 00:07:52.496 { 00:07:52.496 "name": "BaseBdev1", 00:07:52.496 "uuid": "fe11baac-221e-40b5-9b1a-fba10349c814", 00:07:52.496 "is_configured": true, 00:07:52.496 "data_offset": 0, 00:07:52.496 "data_size": 65536 00:07:52.496 }, 00:07:52.496 { 00:07:52.496 "name": "BaseBdev2", 00:07:52.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.496 "is_configured": false, 00:07:52.496 "data_offset": 0, 00:07:52.496 "data_size": 0 00:07:52.496 } 00:07:52.496 ] 
00:07:52.496 }' 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.496 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.756 [2024-09-28 08:45:30.723899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.756 [2024-09-28 08:45:30.723955] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.756 [2024-09-28 08:45:30.723967] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:52.756 [2024-09-28 08:45:30.724258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:52.756 [2024-09-28 08:45:30.724432] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.756 [2024-09-28 08:45:30.724445] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:52.756 [2024-09-28 08:45:30.724766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.756 BaseBdev2 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.756 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.756 [ 00:07:52.756 { 00:07:52.756 "name": "BaseBdev2", 00:07:52.756 "aliases": [ 00:07:52.756 "b264c840-ff26-4bb5-a0c8-500581d0b2b4" 00:07:52.756 ], 00:07:52.756 "product_name": "Malloc disk", 00:07:53.015 "block_size": 512, 00:07:53.015 "num_blocks": 65536, 00:07:53.015 "uuid": "b264c840-ff26-4bb5-a0c8-500581d0b2b4", 00:07:53.015 "assigned_rate_limits": { 00:07:53.015 "rw_ios_per_sec": 0, 00:07:53.015 "rw_mbytes_per_sec": 0, 00:07:53.015 "r_mbytes_per_sec": 0, 00:07:53.015 "w_mbytes_per_sec": 0 00:07:53.015 }, 00:07:53.015 "claimed": true, 00:07:53.015 "claim_type": "exclusive_write", 00:07:53.015 "zoned": false, 00:07:53.015 "supported_io_types": { 00:07:53.015 "read": true, 00:07:53.015 "write": true, 00:07:53.015 "unmap": true, 00:07:53.015 "flush": true, 00:07:53.015 "reset": true, 00:07:53.015 "nvme_admin": false, 00:07:53.015 "nvme_io": false, 00:07:53.015 "nvme_io_md": false, 00:07:53.015 "write_zeroes": 
true, 00:07:53.015 "zcopy": true, 00:07:53.015 "get_zone_info": false, 00:07:53.015 "zone_management": false, 00:07:53.015 "zone_append": false, 00:07:53.015 "compare": false, 00:07:53.015 "compare_and_write": false, 00:07:53.015 "abort": true, 00:07:53.015 "seek_hole": false, 00:07:53.015 "seek_data": false, 00:07:53.015 "copy": true, 00:07:53.015 "nvme_iov_md": false 00:07:53.015 }, 00:07:53.015 "memory_domains": [ 00:07:53.015 { 00:07:53.015 "dma_device_id": "system", 00:07:53.015 "dma_device_type": 1 00:07:53.015 }, 00:07:53.015 { 00:07:53.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.015 "dma_device_type": 2 00:07:53.015 } 00:07:53.015 ], 00:07:53.015 "driver_specific": {} 00:07:53.015 } 00:07:53.015 ] 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.015 08:45:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.015 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.015 "name": "Existed_Raid", 00:07:53.016 "uuid": "f15543d7-0196-42c5-9744-2a4f5b07ddb4", 00:07:53.016 "strip_size_kb": 0, 00:07:53.016 "state": "online", 00:07:53.016 "raid_level": "raid1", 00:07:53.016 "superblock": false, 00:07:53.016 "num_base_bdevs": 2, 00:07:53.016 "num_base_bdevs_discovered": 2, 00:07:53.016 "num_base_bdevs_operational": 2, 00:07:53.016 "base_bdevs_list": [ 00:07:53.016 { 00:07:53.016 "name": "BaseBdev1", 00:07:53.016 "uuid": "fe11baac-221e-40b5-9b1a-fba10349c814", 00:07:53.016 "is_configured": true, 00:07:53.016 "data_offset": 0, 00:07:53.016 "data_size": 65536 00:07:53.016 }, 00:07:53.016 { 00:07:53.016 "name": "BaseBdev2", 00:07:53.016 "uuid": "b264c840-ff26-4bb5-a0c8-500581d0b2b4", 00:07:53.016 "is_configured": true, 00:07:53.016 "data_offset": 0, 00:07:53.016 "data_size": 65536 00:07:53.016 } 00:07:53.016 ] 00:07:53.016 }' 00:07:53.016 08:45:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.016 08:45:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.275 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:53.275 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:53.275 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:53.275 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:53.275 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:53.275 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:53.275 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:53.275 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:53.275 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.275 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.275 [2024-09-28 08:45:31.183433] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.275 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.275 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:53.275 "name": "Existed_Raid", 00:07:53.275 "aliases": [ 00:07:53.275 "f15543d7-0196-42c5-9744-2a4f5b07ddb4" 00:07:53.275 ], 00:07:53.275 "product_name": "Raid Volume", 00:07:53.275 "block_size": 512, 00:07:53.275 "num_blocks": 65536, 00:07:53.275 "uuid": "f15543d7-0196-42c5-9744-2a4f5b07ddb4", 00:07:53.275 "assigned_rate_limits": { 00:07:53.275 "rw_ios_per_sec": 0, 00:07:53.275 "rw_mbytes_per_sec": 0, 00:07:53.275 "r_mbytes_per_sec": 0, 00:07:53.275 
"w_mbytes_per_sec": 0 00:07:53.275 }, 00:07:53.275 "claimed": false, 00:07:53.275 "zoned": false, 00:07:53.275 "supported_io_types": { 00:07:53.275 "read": true, 00:07:53.275 "write": true, 00:07:53.275 "unmap": false, 00:07:53.275 "flush": false, 00:07:53.275 "reset": true, 00:07:53.275 "nvme_admin": false, 00:07:53.275 "nvme_io": false, 00:07:53.275 "nvme_io_md": false, 00:07:53.275 "write_zeroes": true, 00:07:53.275 "zcopy": false, 00:07:53.275 "get_zone_info": false, 00:07:53.275 "zone_management": false, 00:07:53.275 "zone_append": false, 00:07:53.275 "compare": false, 00:07:53.275 "compare_and_write": false, 00:07:53.275 "abort": false, 00:07:53.275 "seek_hole": false, 00:07:53.275 "seek_data": false, 00:07:53.275 "copy": false, 00:07:53.275 "nvme_iov_md": false 00:07:53.275 }, 00:07:53.275 "memory_domains": [ 00:07:53.275 { 00:07:53.275 "dma_device_id": "system", 00:07:53.275 "dma_device_type": 1 00:07:53.275 }, 00:07:53.275 { 00:07:53.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.275 "dma_device_type": 2 00:07:53.275 }, 00:07:53.275 { 00:07:53.275 "dma_device_id": "system", 00:07:53.275 "dma_device_type": 1 00:07:53.275 }, 00:07:53.275 { 00:07:53.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.275 "dma_device_type": 2 00:07:53.275 } 00:07:53.275 ], 00:07:53.275 "driver_specific": { 00:07:53.275 "raid": { 00:07:53.275 "uuid": "f15543d7-0196-42c5-9744-2a4f5b07ddb4", 00:07:53.275 "strip_size_kb": 0, 00:07:53.275 "state": "online", 00:07:53.275 "raid_level": "raid1", 00:07:53.275 "superblock": false, 00:07:53.275 "num_base_bdevs": 2, 00:07:53.275 "num_base_bdevs_discovered": 2, 00:07:53.275 "num_base_bdevs_operational": 2, 00:07:53.275 "base_bdevs_list": [ 00:07:53.275 { 00:07:53.275 "name": "BaseBdev1", 00:07:53.275 "uuid": "fe11baac-221e-40b5-9b1a-fba10349c814", 00:07:53.275 "is_configured": true, 00:07:53.275 "data_offset": 0, 00:07:53.275 "data_size": 65536 00:07:53.275 }, 00:07:53.275 { 00:07:53.275 "name": "BaseBdev2", 00:07:53.275 "uuid": 
"b264c840-ff26-4bb5-a0c8-500581d0b2b4", 00:07:53.275 "is_configured": true, 00:07:53.275 "data_offset": 0, 00:07:53.275 "data_size": 65536 00:07:53.275 } 00:07:53.275 ] 00:07:53.275 } 00:07:53.275 } 00:07:53.275 }' 00:07:53.275 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:53.275 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:53.275 BaseBdev2' 00:07:53.275 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:53.535 08:45:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.535 [2024-09-28 08:45:31.410802] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.535 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.536 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.536 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.536 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.795 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.795 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.795 "name": "Existed_Raid", 00:07:53.795 "uuid": "f15543d7-0196-42c5-9744-2a4f5b07ddb4", 00:07:53.795 "strip_size_kb": 0, 00:07:53.795 "state": "online", 00:07:53.795 "raid_level": "raid1", 00:07:53.795 "superblock": false, 00:07:53.795 "num_base_bdevs": 2, 00:07:53.795 "num_base_bdevs_discovered": 1, 00:07:53.795 "num_base_bdevs_operational": 1, 00:07:53.795 "base_bdevs_list": [ 00:07:53.795 { 
00:07:53.795 "name": null, 00:07:53.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.795 "is_configured": false, 00:07:53.795 "data_offset": 0, 00:07:53.795 "data_size": 65536 00:07:53.795 }, 00:07:53.795 { 00:07:53.795 "name": "BaseBdev2", 00:07:53.795 "uuid": "b264c840-ff26-4bb5-a0c8-500581d0b2b4", 00:07:53.795 "is_configured": true, 00:07:53.795 "data_offset": 0, 00:07:53.795 "data_size": 65536 00:07:53.795 } 00:07:53.795 ] 00:07:53.795 }' 00:07:53.795 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.795 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.054 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:54.054 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:54.054 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.054 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:54.054 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.054 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.054 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.054 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:54.054 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:54.054 08:45:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:54.054 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.054 08:45:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:54.054 [2024-09-28 08:45:31.918252] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:54.054 [2024-09-28 08:45:31.918362] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.054 [2024-09-28 08:45:32.016992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.054 [2024-09-28 08:45:32.017053] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.054 [2024-09-28 08:45:32.017066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:54.054 08:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.054 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:54.054 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:54.054 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.054 08:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.054 08:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.054 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:54.054 08:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.313 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:54.313 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:54.313 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:54.313 08:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62695 00:07:54.313 08:45:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62695 ']' 00:07:54.313 08:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 62695 00:07:54.313 08:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:54.313 08:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:54.313 08:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62695 00:07:54.313 08:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:54.313 08:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:54.313 08:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62695' 00:07:54.313 killing process with pid 62695 00:07:54.313 08:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62695 00:07:54.313 [2024-09-28 08:45:32.092621] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.313 08:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62695 00:07:54.313 [2024-09-28 08:45:32.110067] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:55.695 ************************************ 00:07:55.695 END TEST raid_state_function_test 00:07:55.695 ************************************ 00:07:55.695 00:07:55.695 real 0m5.112s 00:07:55.695 user 0m7.048s 00:07:55.695 sys 0m0.928s 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.695 08:45:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:55.695 08:45:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:55.695 08:45:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.695 08:45:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.695 ************************************ 00:07:55.695 START TEST raid_state_function_test_sb 00:07:55.695 ************************************ 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62947 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62947' 00:07:55.695 Process raid pid: 62947 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62947 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 62947 ']' 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.695 08:45:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.695 08:45:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.695 [2024-09-28 08:45:33.609184] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:55.695 [2024-09-28 08:45:33.609385] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.965 [2024-09-28 08:45:33.777374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.238 [2024-09-28 08:45:34.023288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.498 [2024-09-28 08:45:34.258222] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.498 [2024-09-28 08:45:34.258260] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.498 [2024-09-28 08:45:34.437428] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:56.498 [2024-09-28 08:45:34.437483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:56.498 [2024-09-28 08:45:34.437493] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.498 [2024-09-28 08:45:34.437504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.498 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.758 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.758 "name": "Existed_Raid", 00:07:56.758 "uuid": "da798d38-aab3-4527-b9cf-44f83e3a74a2", 00:07:56.758 "strip_size_kb": 0, 00:07:56.758 "state": "configuring", 00:07:56.758 "raid_level": "raid1", 00:07:56.758 "superblock": true, 00:07:56.758 "num_base_bdevs": 2, 00:07:56.758 "num_base_bdevs_discovered": 0, 00:07:56.758 "num_base_bdevs_operational": 2, 00:07:56.758 "base_bdevs_list": [ 00:07:56.758 { 00:07:56.758 "name": "BaseBdev1", 00:07:56.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.758 "is_configured": false, 00:07:56.758 "data_offset": 0, 00:07:56.758 "data_size": 0 00:07:56.758 }, 00:07:56.758 { 00:07:56.758 "name": "BaseBdev2", 00:07:56.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.758 "is_configured": false, 00:07:56.758 "data_offset": 0, 00:07:56.758 "data_size": 0 00:07:56.758 } 00:07:56.758 ] 00:07:56.758 }' 00:07:56.758 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.758 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.017 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.017 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.017 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.017 [2024-09-28 08:45:34.896559] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:57.018 [2024-09-28 08:45:34.896657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.018 [2024-09-28 08:45:34.908564] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.018 [2024-09-28 08:45:34.908668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.018 [2024-09-28 08:45:34.908700] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.018 [2024-09-28 08:45:34.908726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.018 [2024-09-28 08:45:34.991603] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.018 BaseBdev1 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.018 08:45:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.018 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.018 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:57.018 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.018 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.277 [ 00:07:57.277 { 00:07:57.277 "name": "BaseBdev1", 00:07:57.277 "aliases": [ 00:07:57.277 "e07f2612-4016-478c-b407-82fe13cb9b01" 00:07:57.277 ], 00:07:57.277 "product_name": "Malloc disk", 00:07:57.277 "block_size": 512, 00:07:57.277 "num_blocks": 65536, 00:07:57.277 "uuid": "e07f2612-4016-478c-b407-82fe13cb9b01", 00:07:57.277 "assigned_rate_limits": { 00:07:57.277 "rw_ios_per_sec": 0, 00:07:57.277 "rw_mbytes_per_sec": 0, 00:07:57.277 "r_mbytes_per_sec": 0, 00:07:57.277 "w_mbytes_per_sec": 0 00:07:57.277 }, 00:07:57.277 "claimed": true, 
00:07:57.277 "claim_type": "exclusive_write", 00:07:57.277 "zoned": false, 00:07:57.277 "supported_io_types": { 00:07:57.277 "read": true, 00:07:57.277 "write": true, 00:07:57.277 "unmap": true, 00:07:57.277 "flush": true, 00:07:57.277 "reset": true, 00:07:57.277 "nvme_admin": false, 00:07:57.277 "nvme_io": false, 00:07:57.277 "nvme_io_md": false, 00:07:57.277 "write_zeroes": true, 00:07:57.277 "zcopy": true, 00:07:57.277 "get_zone_info": false, 00:07:57.277 "zone_management": false, 00:07:57.277 "zone_append": false, 00:07:57.277 "compare": false, 00:07:57.277 "compare_and_write": false, 00:07:57.277 "abort": true, 00:07:57.277 "seek_hole": false, 00:07:57.277 "seek_data": false, 00:07:57.277 "copy": true, 00:07:57.277 "nvme_iov_md": false 00:07:57.277 }, 00:07:57.277 "memory_domains": [ 00:07:57.277 { 00:07:57.277 "dma_device_id": "system", 00:07:57.277 "dma_device_type": 1 00:07:57.277 }, 00:07:57.277 { 00:07:57.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.277 "dma_device_type": 2 00:07:57.277 } 00:07:57.277 ], 00:07:57.277 "driver_specific": {} 00:07:57.277 } 00:07:57.277 ] 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.277 "name": "Existed_Raid", 00:07:57.277 "uuid": "202feeb8-7a67-43bd-9619-8b3678eddb6e", 00:07:57.277 "strip_size_kb": 0, 00:07:57.277 "state": "configuring", 00:07:57.277 "raid_level": "raid1", 00:07:57.277 "superblock": true, 00:07:57.277 "num_base_bdevs": 2, 00:07:57.277 "num_base_bdevs_discovered": 1, 00:07:57.277 "num_base_bdevs_operational": 2, 00:07:57.277 "base_bdevs_list": [ 00:07:57.277 { 00:07:57.277 "name": "BaseBdev1", 00:07:57.277 "uuid": "e07f2612-4016-478c-b407-82fe13cb9b01", 00:07:57.277 "is_configured": true, 00:07:57.277 "data_offset": 2048, 00:07:57.277 "data_size": 63488 00:07:57.277 }, 00:07:57.277 { 00:07:57.277 "name": "BaseBdev2", 00:07:57.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.277 "is_configured": false, 00:07:57.277 
"data_offset": 0, 00:07:57.277 "data_size": 0 00:07:57.277 } 00:07:57.277 ] 00:07:57.277 }' 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.277 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.537 [2024-09-28 08:45:35.470800] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.537 [2024-09-28 08:45:35.470903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.537 [2024-09-28 08:45:35.482818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.537 [2024-09-28 08:45:35.484861] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.537 [2024-09-28 08:45:35.484910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.537 08:45:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.537 "name": "Existed_Raid", 00:07:57.537 "uuid": "be0b9a9e-10ea-4b66-a835-0baccd960daa", 00:07:57.537 "strip_size_kb": 0, 00:07:57.537 "state": "configuring", 00:07:57.537 "raid_level": "raid1", 00:07:57.537 "superblock": true, 00:07:57.537 "num_base_bdevs": 2, 00:07:57.537 "num_base_bdevs_discovered": 1, 00:07:57.537 "num_base_bdevs_operational": 2, 00:07:57.537 "base_bdevs_list": [ 00:07:57.537 { 00:07:57.537 "name": "BaseBdev1", 00:07:57.538 "uuid": "e07f2612-4016-478c-b407-82fe13cb9b01", 00:07:57.538 "is_configured": true, 00:07:57.538 "data_offset": 2048, 00:07:57.538 "data_size": 63488 00:07:57.538 }, 00:07:57.538 { 00:07:57.538 "name": "BaseBdev2", 00:07:57.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.538 "is_configured": false, 00:07:57.538 "data_offset": 0, 00:07:57.538 "data_size": 0 00:07:57.538 } 00:07:57.538 ] 00:07:57.538 }' 00:07:57.538 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.538 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.105 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:58.105 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.105 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.105 [2024-09-28 08:45:35.988272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.105 [2024-09-28 08:45:35.988563] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.105 [2024-09-28 08:45:35.988583] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:58.105 [2024-09-28 08:45:35.988914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:58.105 
[2024-09-28 08:45:35.989088] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:58.105 [2024-09-28 08:45:35.989106] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:58.105 BaseBdev2 00:07:58.105 [2024-09-28 08:45:35.989261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.105 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.105 08:45:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:58.105 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:58.105 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:58.105 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:58.105 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:58.105 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:58.105 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:58.105 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.105 08:45:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.105 [ 00:07:58.105 { 00:07:58.105 "name": "BaseBdev2", 00:07:58.105 "aliases": [ 00:07:58.105 "37f10a06-23f9-4464-9f81-70ea739609e2" 00:07:58.105 ], 00:07:58.105 "product_name": "Malloc disk", 00:07:58.105 "block_size": 512, 00:07:58.105 "num_blocks": 65536, 00:07:58.105 "uuid": "37f10a06-23f9-4464-9f81-70ea739609e2", 00:07:58.105 "assigned_rate_limits": { 00:07:58.105 "rw_ios_per_sec": 0, 00:07:58.105 "rw_mbytes_per_sec": 0, 00:07:58.105 "r_mbytes_per_sec": 0, 00:07:58.105 "w_mbytes_per_sec": 0 00:07:58.105 }, 00:07:58.105 "claimed": true, 00:07:58.105 "claim_type": "exclusive_write", 00:07:58.105 "zoned": false, 00:07:58.105 "supported_io_types": { 00:07:58.105 "read": true, 00:07:58.105 "write": true, 00:07:58.105 "unmap": true, 00:07:58.105 "flush": true, 00:07:58.105 "reset": true, 00:07:58.105 "nvme_admin": false, 00:07:58.105 "nvme_io": false, 00:07:58.105 "nvme_io_md": false, 00:07:58.105 "write_zeroes": true, 00:07:58.105 "zcopy": true, 00:07:58.105 "get_zone_info": false, 00:07:58.105 "zone_management": false, 00:07:58.105 "zone_append": false, 00:07:58.105 "compare": false, 00:07:58.105 "compare_and_write": false, 00:07:58.105 "abort": true, 00:07:58.105 "seek_hole": false, 00:07:58.105 "seek_data": false, 00:07:58.105 "copy": true, 00:07:58.105 "nvme_iov_md": false 00:07:58.105 }, 00:07:58.105 "memory_domains": [ 00:07:58.105 { 00:07:58.105 "dma_device_id": "system", 00:07:58.105 "dma_device_type": 1 00:07:58.105 }, 00:07:58.105 { 00:07:58.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.105 "dma_device_type": 2 00:07:58.105 } 00:07:58.105 ], 00:07:58.105 "driver_specific": {} 00:07:58.105 } 00:07:58.105 ] 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:58.105 "name": "Existed_Raid", 00:07:58.105 "uuid": "be0b9a9e-10ea-4b66-a835-0baccd960daa", 00:07:58.105 "strip_size_kb": 0, 00:07:58.105 "state": "online", 00:07:58.105 "raid_level": "raid1", 00:07:58.105 "superblock": true, 00:07:58.105 "num_base_bdevs": 2, 00:07:58.105 "num_base_bdevs_discovered": 2, 00:07:58.105 "num_base_bdevs_operational": 2, 00:07:58.105 "base_bdevs_list": [ 00:07:58.105 { 00:07:58.105 "name": "BaseBdev1", 00:07:58.105 "uuid": "e07f2612-4016-478c-b407-82fe13cb9b01", 00:07:58.105 "is_configured": true, 00:07:58.105 "data_offset": 2048, 00:07:58.105 "data_size": 63488 00:07:58.105 }, 00:07:58.105 { 00:07:58.105 "name": "BaseBdev2", 00:07:58.105 "uuid": "37f10a06-23f9-4464-9f81-70ea739609e2", 00:07:58.105 "is_configured": true, 00:07:58.105 "data_offset": 2048, 00:07:58.105 "data_size": 63488 00:07:58.105 } 00:07:58.105 ] 00:07:58.105 }' 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.105 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.673 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:58.673 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:58.673 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.673 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.673 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:58.673 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.673 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:58.673 08:45:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.673 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.673 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.673 [2024-09-28 08:45:36.471748] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.673 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.673 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:58.673 "name": "Existed_Raid", 00:07:58.673 "aliases": [ 00:07:58.673 "be0b9a9e-10ea-4b66-a835-0baccd960daa" 00:07:58.673 ], 00:07:58.673 "product_name": "Raid Volume", 00:07:58.673 "block_size": 512, 00:07:58.673 "num_blocks": 63488, 00:07:58.673 "uuid": "be0b9a9e-10ea-4b66-a835-0baccd960daa", 00:07:58.673 "assigned_rate_limits": { 00:07:58.673 "rw_ios_per_sec": 0, 00:07:58.673 "rw_mbytes_per_sec": 0, 00:07:58.673 "r_mbytes_per_sec": 0, 00:07:58.673 "w_mbytes_per_sec": 0 00:07:58.673 }, 00:07:58.673 "claimed": false, 00:07:58.673 "zoned": false, 00:07:58.673 "supported_io_types": { 00:07:58.673 "read": true, 00:07:58.673 "write": true, 00:07:58.673 "unmap": false, 00:07:58.673 "flush": false, 00:07:58.673 "reset": true, 00:07:58.673 "nvme_admin": false, 00:07:58.673 "nvme_io": false, 00:07:58.673 "nvme_io_md": false, 00:07:58.673 "write_zeroes": true, 00:07:58.673 "zcopy": false, 00:07:58.673 "get_zone_info": false, 00:07:58.673 "zone_management": false, 00:07:58.673 "zone_append": false, 00:07:58.673 "compare": false, 00:07:58.673 "compare_and_write": false, 00:07:58.673 "abort": false, 00:07:58.673 "seek_hole": false, 00:07:58.674 "seek_data": false, 00:07:58.674 "copy": false, 00:07:58.674 "nvme_iov_md": false 00:07:58.674 }, 00:07:58.674 "memory_domains": [ 00:07:58.674 { 00:07:58.674 "dma_device_id": "system", 00:07:58.674 
"dma_device_type": 1 00:07:58.674 }, 00:07:58.674 { 00:07:58.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.674 "dma_device_type": 2 00:07:58.674 }, 00:07:58.674 { 00:07:58.674 "dma_device_id": "system", 00:07:58.674 "dma_device_type": 1 00:07:58.674 }, 00:07:58.674 { 00:07:58.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.674 "dma_device_type": 2 00:07:58.674 } 00:07:58.674 ], 00:07:58.674 "driver_specific": { 00:07:58.674 "raid": { 00:07:58.674 "uuid": "be0b9a9e-10ea-4b66-a835-0baccd960daa", 00:07:58.674 "strip_size_kb": 0, 00:07:58.674 "state": "online", 00:07:58.674 "raid_level": "raid1", 00:07:58.674 "superblock": true, 00:07:58.674 "num_base_bdevs": 2, 00:07:58.674 "num_base_bdevs_discovered": 2, 00:07:58.674 "num_base_bdevs_operational": 2, 00:07:58.674 "base_bdevs_list": [ 00:07:58.674 { 00:07:58.674 "name": "BaseBdev1", 00:07:58.674 "uuid": "e07f2612-4016-478c-b407-82fe13cb9b01", 00:07:58.674 "is_configured": true, 00:07:58.674 "data_offset": 2048, 00:07:58.674 "data_size": 63488 00:07:58.674 }, 00:07:58.674 { 00:07:58.674 "name": "BaseBdev2", 00:07:58.674 "uuid": "37f10a06-23f9-4464-9f81-70ea739609e2", 00:07:58.674 "is_configured": true, 00:07:58.674 "data_offset": 2048, 00:07:58.674 "data_size": 63488 00:07:58.674 } 00:07:58.674 ] 00:07:58.674 } 00:07:58.674 } 00:07:58.674 }' 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:58.674 BaseBdev2' 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.674 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:58.934 08:45:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.934 [2024-09-28 08:45:36.695200] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.934 "name": "Existed_Raid", 00:07:58.934 "uuid": "be0b9a9e-10ea-4b66-a835-0baccd960daa", 00:07:58.934 "strip_size_kb": 0, 00:07:58.934 "state": "online", 00:07:58.934 "raid_level": "raid1", 00:07:58.934 "superblock": true, 00:07:58.934 "num_base_bdevs": 2, 00:07:58.934 "num_base_bdevs_discovered": 1, 00:07:58.934 "num_base_bdevs_operational": 1, 00:07:58.934 "base_bdevs_list": [ 00:07:58.934 { 00:07:58.934 "name": null, 00:07:58.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.934 "is_configured": false, 00:07:58.934 "data_offset": 0, 00:07:58.934 "data_size": 63488 00:07:58.934 }, 00:07:58.934 { 00:07:58.934 "name": "BaseBdev2", 00:07:58.934 "uuid": "37f10a06-23f9-4464-9f81-70ea739609e2", 00:07:58.934 "is_configured": true, 00:07:58.934 "data_offset": 2048, 00:07:58.934 "data_size": 63488 00:07:58.934 } 00:07:58.934 ] 00:07:58.934 }' 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.934 08:45:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.504 [2024-09-28 08:45:37.280335] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:59.504 [2024-09-28 08:45:37.280501] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.504 [2024-09-28 08:45:37.381578] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.504 [2024-09-28 08:45:37.381767] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.504 [2024-09-28 08:45:37.381815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62947 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 62947 ']' 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 62947 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62947 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.504 killing process with pid 62947 
00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62947' 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 62947 00:07:59.504 [2024-09-28 08:45:37.472438] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.504 08:45:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 62947 00:07:59.504 [2024-09-28 08:45:37.490543] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.884 08:45:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:00.884 00:08:00.884 real 0m5.307s 00:08:00.884 user 0m7.420s 00:08:00.884 sys 0m0.927s 00:08:00.884 08:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.884 08:45:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.884 ************************************ 00:08:00.884 END TEST raid_state_function_test_sb 00:08:00.884 ************************************ 00:08:00.884 08:45:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:00.884 08:45:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:00.884 08:45:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.884 08:45:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.144 ************************************ 00:08:01.144 START TEST raid_superblock_test 00:08:01.144 ************************************ 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63205 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63205 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 63205 ']' 00:08:01.144 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.144 08:45:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.144 [2024-09-28 08:45:38.986235] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:01.144 [2024-09-28 08:45:38.986370] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63205 ] 00:08:01.405 [2024-09-28 08:45:39.151589] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.664 [2024-09-28 08:45:39.402277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.664 [2024-09-28 08:45:39.617546] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.664 [2024-09-28 08:45:39.617613] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.923 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.923 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:01.923 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:01.923 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.923 08:45:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:01.923 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:01.923 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:01.923 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:01.923 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:01.923 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:01.923 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:01.923 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.923 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.923 malloc1 00:08:01.923 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.923 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:01.923 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.924 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.924 [2024-09-28 08:45:39.887152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:01.924 [2024-09-28 08:45:39.887288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.924 [2024-09-28 08:45:39.887336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:01.924 [2024-09-28 08:45:39.887420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.924 [2024-09-28 08:45:39.889867] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.924 [2024-09-28 08:45:39.889936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:01.924 pt1 00:08:01.924 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.924 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:01.924 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.924 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:01.924 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:01.924 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:01.924 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:01.924 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:01.924 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:01.924 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:01.924 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.924 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.183 malloc2 00:08:02.183 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.183 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:02.183 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.183 08:45:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.183 [2024-09-28 08:45:39.968969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:02.183 [2024-09-28 08:45:39.969068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.183 [2024-09-28 08:45:39.969117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:02.183 [2024-09-28 08:45:39.969147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.183 [2024-09-28 08:45:39.971520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.183 [2024-09-28 08:45:39.971595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:02.183 pt2 00:08:02.183 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.183 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:02.183 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:02.183 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:02.183 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.183 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.183 [2024-09-28 08:45:39.981017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:02.183 [2024-09-28 08:45:39.983078] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:02.183 [2024-09-28 08:45:39.983257] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:02.183 [2024-09-28 08:45:39.983270] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:02.184 [2024-09-28 
08:45:39.983502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:02.184 [2024-09-28 08:45:39.983676] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:02.184 [2024-09-28 08:45:39.983691] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:02.184 [2024-09-28 08:45:39.983836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.184 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.184 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:02.184 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.184 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.184 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.184 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.184 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.184 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.184 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.184 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.184 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.184 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.184 08:45:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.184 08:45:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.184 08:45:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.184 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.184 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.184 "name": "raid_bdev1", 00:08:02.184 "uuid": "4b534a23-5758-4f69-b861-38b1a03cfe65", 00:08:02.184 "strip_size_kb": 0, 00:08:02.184 "state": "online", 00:08:02.184 "raid_level": "raid1", 00:08:02.184 "superblock": true, 00:08:02.184 "num_base_bdevs": 2, 00:08:02.184 "num_base_bdevs_discovered": 2, 00:08:02.184 "num_base_bdevs_operational": 2, 00:08:02.184 "base_bdevs_list": [ 00:08:02.184 { 00:08:02.184 "name": "pt1", 00:08:02.184 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.184 "is_configured": true, 00:08:02.184 "data_offset": 2048, 00:08:02.184 "data_size": 63488 00:08:02.184 }, 00:08:02.184 { 00:08:02.184 "name": "pt2", 00:08:02.184 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.184 "is_configured": true, 00:08:02.184 "data_offset": 2048, 00:08:02.184 "data_size": 63488 00:08:02.184 } 00:08:02.184 ] 00:08:02.184 }' 00:08:02.184 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.184 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.443 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:02.443 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:02.443 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:02.443 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:02.443 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:02.443 
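[Annotation, not part of the captured log.] The `verify_raid_bdev_state raid_bdev1 online raid1 0 2` call traced above fetches the raid bdev with `rpc_cmd bdev_raid_get_bdevs all`, filters it via `jq -r '.[] | select(.name == "raid_bdev1")'`, and compares fields against the expected values. A rough Python sketch of that check, using the JSON dump captured in the trace (the helper name and field checks mirror `bdev_raid.sh`, but this sketch is not part of the test suite):

```python
import json

# raid_bdev1 info as captured above from `rpc_cmd bdev_raid_get_bdevs all`
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "4b534a23-5758-4f69-b861-38b1a03cfe65",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002",
     "is_configured": true, "data_offset": 2048, "data_size": 63488}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    # Mirrors the field comparisons done by verify_raid_bdev_state in bdev_raid.sh
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    assert info["num_base_bdevs_operational"] == num_operational
    return True

# Same invocation as: verify_raid_bdev_state raid_bdev1 online raid1 0 2
print(verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 2))
```

Both passthru base bdevs are `is_configured: true` here, so discovered == operational == 2 and the raid1 volume is `online`.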
08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:02.443 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:02.443 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.443 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.443 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:02.443 [2024-09-28 08:45:40.408508] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.443 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:02.703 "name": "raid_bdev1", 00:08:02.703 "aliases": [ 00:08:02.703 "4b534a23-5758-4f69-b861-38b1a03cfe65" 00:08:02.703 ], 00:08:02.703 "product_name": "Raid Volume", 00:08:02.703 "block_size": 512, 00:08:02.703 "num_blocks": 63488, 00:08:02.703 "uuid": "4b534a23-5758-4f69-b861-38b1a03cfe65", 00:08:02.703 "assigned_rate_limits": { 00:08:02.703 "rw_ios_per_sec": 0, 00:08:02.703 "rw_mbytes_per_sec": 0, 00:08:02.703 "r_mbytes_per_sec": 0, 00:08:02.703 "w_mbytes_per_sec": 0 00:08:02.703 }, 00:08:02.703 "claimed": false, 00:08:02.703 "zoned": false, 00:08:02.703 "supported_io_types": { 00:08:02.703 "read": true, 00:08:02.703 "write": true, 00:08:02.703 "unmap": false, 00:08:02.703 "flush": false, 00:08:02.703 "reset": true, 00:08:02.703 "nvme_admin": false, 00:08:02.703 "nvme_io": false, 00:08:02.703 "nvme_io_md": false, 00:08:02.703 "write_zeroes": true, 00:08:02.703 "zcopy": false, 00:08:02.703 "get_zone_info": false, 00:08:02.703 "zone_management": false, 00:08:02.703 "zone_append": false, 00:08:02.703 "compare": false, 00:08:02.703 "compare_and_write": false, 00:08:02.703 "abort": false, 00:08:02.703 "seek_hole": false, 
00:08:02.703 "seek_data": false, 00:08:02.703 "copy": false, 00:08:02.703 "nvme_iov_md": false 00:08:02.703 }, 00:08:02.703 "memory_domains": [ 00:08:02.703 { 00:08:02.703 "dma_device_id": "system", 00:08:02.703 "dma_device_type": 1 00:08:02.703 }, 00:08:02.703 { 00:08:02.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.703 "dma_device_type": 2 00:08:02.703 }, 00:08:02.703 { 00:08:02.703 "dma_device_id": "system", 00:08:02.703 "dma_device_type": 1 00:08:02.703 }, 00:08:02.703 { 00:08:02.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.703 "dma_device_type": 2 00:08:02.703 } 00:08:02.703 ], 00:08:02.703 "driver_specific": { 00:08:02.703 "raid": { 00:08:02.703 "uuid": "4b534a23-5758-4f69-b861-38b1a03cfe65", 00:08:02.703 "strip_size_kb": 0, 00:08:02.703 "state": "online", 00:08:02.703 "raid_level": "raid1", 00:08:02.703 "superblock": true, 00:08:02.703 "num_base_bdevs": 2, 00:08:02.703 "num_base_bdevs_discovered": 2, 00:08:02.703 "num_base_bdevs_operational": 2, 00:08:02.703 "base_bdevs_list": [ 00:08:02.703 { 00:08:02.703 "name": "pt1", 00:08:02.703 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.703 "is_configured": true, 00:08:02.703 "data_offset": 2048, 00:08:02.703 "data_size": 63488 00:08:02.703 }, 00:08:02.703 { 00:08:02.703 "name": "pt2", 00:08:02.703 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.703 "is_configured": true, 00:08:02.703 "data_offset": 2048, 00:08:02.703 "data_size": 63488 00:08:02.703 } 00:08:02.703 ] 00:08:02.703 } 00:08:02.703 } 00:08:02.703 }' 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:02.703 pt2' 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.703 08:45:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.703 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.704 [2024-09-28 08:45:40.628066] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.704 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.704 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4b534a23-5758-4f69-b861-38b1a03cfe65 00:08:02.704 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4b534a23-5758-4f69-b861-38b1a03cfe65 ']' 00:08:02.704 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:02.704 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.704 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.704 [2024-09-28 08:45:40.675755] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.704 [2024-09-28 08:45:40.675815] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.704 [2024-09-28 08:45:40.675918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.704 [2024-09-28 08:45:40.676007] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.704 [2024-09-28 08:45:40.676053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:02.704 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.704 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:02.704 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.704 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.704 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:02.704 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.963 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:02.963 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:02.963 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:02.963 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:02.963 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.963 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.963 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.963 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:02.963 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:02.963 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.963 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.963 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.963 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.964 [2024-09-28 08:45:40.819522] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:02.964 [2024-09-28 08:45:40.821629] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:02.964 [2024-09-28 08:45:40.821709] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:08:02.964 [2024-09-28 08:45:40.821758] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:02.964 [2024-09-28 08:45:40.821772] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.964 [2024-09-28 08:45:40.821782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:02.964 request: 00:08:02.964 { 00:08:02.964 "name": "raid_bdev1", 00:08:02.964 "raid_level": "raid1", 00:08:02.964 "base_bdevs": [ 00:08:02.964 "malloc1", 00:08:02.964 "malloc2" 00:08:02.964 ], 00:08:02.964 "superblock": false, 00:08:02.964 "method": "bdev_raid_create", 00:08:02.964 "req_id": 1 00:08:02.964 } 00:08:02.964 Got JSON-RPC error response 00:08:02.964 response: 00:08:02.964 { 00:08:02.964 "code": -17, 00:08:02.964 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:02.964 } 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.964 [2024-09-28 08:45:40.883375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:02.964 [2024-09-28 08:45:40.883463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.964 [2024-09-28 08:45:40.883494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:02.964 [2024-09-28 08:45:40.883524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.964 [2024-09-28 08:45:40.885985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.964 [2024-09-28 08:45:40.886056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:02.964 [2024-09-28 08:45:40.886153] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:02.964 [2024-09-28 08:45:40.886236] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:02.964 pt1 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.964 08:45:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.964 "name": "raid_bdev1", 00:08:02.964 "uuid": "4b534a23-5758-4f69-b861-38b1a03cfe65", 00:08:02.964 "strip_size_kb": 0, 00:08:02.964 "state": "configuring", 00:08:02.964 "raid_level": "raid1", 00:08:02.964 "superblock": true, 00:08:02.964 "num_base_bdevs": 2, 00:08:02.964 "num_base_bdevs_discovered": 1, 00:08:02.964 "num_base_bdevs_operational": 2, 00:08:02.964 "base_bdevs_list": [ 00:08:02.964 { 00:08:02.964 "name": "pt1", 00:08:02.964 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.964 
"is_configured": true, 00:08:02.964 "data_offset": 2048, 00:08:02.964 "data_size": 63488 00:08:02.964 }, 00:08:02.964 { 00:08:02.964 "name": null, 00:08:02.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.964 "is_configured": false, 00:08:02.964 "data_offset": 2048, 00:08:02.964 "data_size": 63488 00:08:02.964 } 00:08:02.964 ] 00:08:02.964 }' 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.964 08:45:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.532 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:03.532 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:03.532 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:03.532 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:03.532 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.532 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.532 [2024-09-28 08:45:41.318739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:03.532 [2024-09-28 08:45:41.318868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.532 [2024-09-28 08:45:41.318895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:03.532 [2024-09-28 08:45:41.318907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.532 [2024-09-28 08:45:41.319457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.532 [2024-09-28 08:45:41.319487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:03.532 [2024-09-28 08:45:41.319580] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:03.532 [2024-09-28 08:45:41.319606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:03.532 [2024-09-28 08:45:41.319762] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:03.532 [2024-09-28 08:45:41.319780] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:03.532 [2024-09-28 08:45:41.320045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:03.532 [2024-09-28 08:45:41.320221] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:03.532 [2024-09-28 08:45:41.320232] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:03.532 [2024-09-28 08:45:41.320385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.532 pt2 00:08:03.532 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.532 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:03.532 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:03.532 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:03.532 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.532 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.532 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.532 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.533 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.533 
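[Annotation, not part of the captured log.] The `NOT rpc_cmd bdev_raid_create …` step traced earlier is the negative-path check: because raid superblocks from the deleted `raid_bdev1` are still present on `malloc1`/`malloc2`, a second create over the raw malloc bdevs must fail. The JSON-RPC error dumped in the log carries code `-17`, which corresponds to `-EEXIST`; `rpc_cmd` then exits non-zero and the `NOT` wrapper in `autotest_common.sh` inverts that status so the test passes only because creation failed. A small sketch of that expected-failure check, using the response captured in the log:

```python
import json

# JSON-RPC error response captured above from the failed bdev_raid_create call
response_text = """
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
"""

err = json.loads(response_text)

# -17 mirrors -EEXIST: a superblock for raid_bdev1 already exists on the
# base bdevs, so re-creating the raid over them is rejected.
assert err["code"] == -17
assert "File exists" in err["message"]
print("expected failure confirmed")
```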
08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.533 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.533 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.533 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.533 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.533 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.533 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.533 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.533 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.533 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.533 "name": "raid_bdev1", 00:08:03.533 "uuid": "4b534a23-5758-4f69-b861-38b1a03cfe65", 00:08:03.533 "strip_size_kb": 0, 00:08:03.533 "state": "online", 00:08:03.533 "raid_level": "raid1", 00:08:03.533 "superblock": true, 00:08:03.533 "num_base_bdevs": 2, 00:08:03.533 "num_base_bdevs_discovered": 2, 00:08:03.533 "num_base_bdevs_operational": 2, 00:08:03.533 "base_bdevs_list": [ 00:08:03.533 { 00:08:03.533 "name": "pt1", 00:08:03.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.533 "is_configured": true, 00:08:03.533 "data_offset": 2048, 00:08:03.533 "data_size": 63488 00:08:03.533 }, 00:08:03.533 { 00:08:03.533 "name": "pt2", 00:08:03.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.533 "is_configured": true, 00:08:03.533 "data_offset": 2048, 00:08:03.533 "data_size": 63488 00:08:03.533 } 00:08:03.533 ] 00:08:03.533 }' 00:08:03.533 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:03.533 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.790 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:03.790 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:03.790 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.790 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.790 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.790 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.790 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:03.790 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.790 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.790 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.790 [2024-09-28 08:45:41.746207] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.790 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:04.049 "name": "raid_bdev1", 00:08:04.049 "aliases": [ 00:08:04.049 "4b534a23-5758-4f69-b861-38b1a03cfe65" 00:08:04.049 ], 00:08:04.049 "product_name": "Raid Volume", 00:08:04.049 "block_size": 512, 00:08:04.049 "num_blocks": 63488, 00:08:04.049 "uuid": "4b534a23-5758-4f69-b861-38b1a03cfe65", 00:08:04.049 "assigned_rate_limits": { 00:08:04.049 "rw_ios_per_sec": 0, 00:08:04.049 "rw_mbytes_per_sec": 0, 00:08:04.049 "r_mbytes_per_sec": 0, 00:08:04.049 "w_mbytes_per_sec": 0 
00:08:04.049 }, 00:08:04.049 "claimed": false, 00:08:04.049 "zoned": false, 00:08:04.049 "supported_io_types": { 00:08:04.049 "read": true, 00:08:04.049 "write": true, 00:08:04.049 "unmap": false, 00:08:04.049 "flush": false, 00:08:04.049 "reset": true, 00:08:04.049 "nvme_admin": false, 00:08:04.049 "nvme_io": false, 00:08:04.049 "nvme_io_md": false, 00:08:04.049 "write_zeroes": true, 00:08:04.049 "zcopy": false, 00:08:04.049 "get_zone_info": false, 00:08:04.049 "zone_management": false, 00:08:04.049 "zone_append": false, 00:08:04.049 "compare": false, 00:08:04.049 "compare_and_write": false, 00:08:04.049 "abort": false, 00:08:04.049 "seek_hole": false, 00:08:04.049 "seek_data": false, 00:08:04.049 "copy": false, 00:08:04.049 "nvme_iov_md": false 00:08:04.049 }, 00:08:04.049 "memory_domains": [ 00:08:04.049 { 00:08:04.049 "dma_device_id": "system", 00:08:04.049 "dma_device_type": 1 00:08:04.049 }, 00:08:04.049 { 00:08:04.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.049 "dma_device_type": 2 00:08:04.049 }, 00:08:04.049 { 00:08:04.049 "dma_device_id": "system", 00:08:04.049 "dma_device_type": 1 00:08:04.049 }, 00:08:04.049 { 00:08:04.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.049 "dma_device_type": 2 00:08:04.049 } 00:08:04.049 ], 00:08:04.049 "driver_specific": { 00:08:04.049 "raid": { 00:08:04.049 "uuid": "4b534a23-5758-4f69-b861-38b1a03cfe65", 00:08:04.049 "strip_size_kb": 0, 00:08:04.049 "state": "online", 00:08:04.049 "raid_level": "raid1", 00:08:04.049 "superblock": true, 00:08:04.049 "num_base_bdevs": 2, 00:08:04.049 "num_base_bdevs_discovered": 2, 00:08:04.049 "num_base_bdevs_operational": 2, 00:08:04.049 "base_bdevs_list": [ 00:08:04.049 { 00:08:04.049 "name": "pt1", 00:08:04.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:04.049 "is_configured": true, 00:08:04.049 "data_offset": 2048, 00:08:04.049 "data_size": 63488 00:08:04.049 }, 00:08:04.049 { 00:08:04.049 "name": "pt2", 00:08:04.049 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:04.049 "is_configured": true, 00:08:04.049 "data_offset": 2048, 00:08:04.049 "data_size": 63488 00:08:04.049 } 00:08:04.049 ] 00:08:04.049 } 00:08:04.049 } 00:08:04.049 }' 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:04.049 pt2' 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.049 [2024-09-28 08:45:41.949871] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4b534a23-5758-4f69-b861-38b1a03cfe65 '!=' 4b534a23-5758-4f69-b861-38b1a03cfe65 ']' 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:04.049 [2024-09-28 08:45:41.989599] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.049 08:45:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.049 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.049 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.049 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.308 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.308 "name": "raid_bdev1", 
00:08:04.308 "uuid": "4b534a23-5758-4f69-b861-38b1a03cfe65", 00:08:04.308 "strip_size_kb": 0, 00:08:04.308 "state": "online", 00:08:04.308 "raid_level": "raid1", 00:08:04.308 "superblock": true, 00:08:04.308 "num_base_bdevs": 2, 00:08:04.308 "num_base_bdevs_discovered": 1, 00:08:04.308 "num_base_bdevs_operational": 1, 00:08:04.308 "base_bdevs_list": [ 00:08:04.308 { 00:08:04.308 "name": null, 00:08:04.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.308 "is_configured": false, 00:08:04.308 "data_offset": 0, 00:08:04.308 "data_size": 63488 00:08:04.308 }, 00:08:04.308 { 00:08:04.308 "name": "pt2", 00:08:04.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.308 "is_configured": true, 00:08:04.308 "data_offset": 2048, 00:08:04.308 "data_size": 63488 00:08:04.308 } 00:08:04.308 ] 00:08:04.308 }' 00:08:04.308 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.308 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.567 [2024-09-28 08:45:42.412850] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.567 [2024-09-28 08:45:42.412924] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.567 [2024-09-28 08:45:42.413029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.567 [2024-09-28 08:45:42.413098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.567 [2024-09-28 08:45:42.413154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:04.567 08:45:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.567 [2024-09-28 08:45:42.484754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:04.567 [2024-09-28 08:45:42.484807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.567 [2024-09-28 08:45:42.484826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:04.567 [2024-09-28 08:45:42.484837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.567 [2024-09-28 08:45:42.487259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.567 [2024-09-28 08:45:42.487334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:04.567 [2024-09-28 08:45:42.487429] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:04.567 [2024-09-28 08:45:42.487484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:04.567 [2024-09-28 08:45:42.487595] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:04.567 [2024-09-28 08:45:42.487607] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:04.567 [2024-09-28 08:45:42.487868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:04.567 [2024-09-28 08:45:42.488027] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:04.567 [2024-09-28 08:45:42.488038] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:04.567 
[2024-09-28 08:45:42.488180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.567 pt2 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.567 "name": 
"raid_bdev1", 00:08:04.567 "uuid": "4b534a23-5758-4f69-b861-38b1a03cfe65", 00:08:04.567 "strip_size_kb": 0, 00:08:04.567 "state": "online", 00:08:04.567 "raid_level": "raid1", 00:08:04.567 "superblock": true, 00:08:04.567 "num_base_bdevs": 2, 00:08:04.567 "num_base_bdevs_discovered": 1, 00:08:04.567 "num_base_bdevs_operational": 1, 00:08:04.567 "base_bdevs_list": [ 00:08:04.567 { 00:08:04.567 "name": null, 00:08:04.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.567 "is_configured": false, 00:08:04.567 "data_offset": 2048, 00:08:04.567 "data_size": 63488 00:08:04.567 }, 00:08:04.567 { 00:08:04.567 "name": "pt2", 00:08:04.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.567 "is_configured": true, 00:08:04.567 "data_offset": 2048, 00:08:04.567 "data_size": 63488 00:08:04.567 } 00:08:04.567 ] 00:08:04.567 }' 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.567 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.135 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:05.135 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.135 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.135 [2024-09-28 08:45:42.959874] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:05.135 [2024-09-28 08:45:42.959948] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.135 [2024-09-28 08:45:42.960035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.135 [2024-09-28 08:45:42.960101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.135 [2024-09-28 08:45:42.960192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name raid_bdev1, state offline 00:08:05.135 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.135 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.135 08:45:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:05.135 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.135 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.135 08:45:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.135 [2024-09-28 08:45:43.015796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:05.135 [2024-09-28 08:45:43.015886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.135 [2024-09-28 08:45:43.015922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:05.135 [2024-09-28 08:45:43.015951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.135 [2024-09-28 08:45:43.018413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.135 [2024-09-28 08:45:43.018479] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:05.135 [2024-09-28 08:45:43.018592] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:05.135 [2024-09-28 08:45:43.018675] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:05.135 [2024-09-28 08:45:43.018849] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:05.135 [2024-09-28 08:45:43.018898] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:05.135 [2024-09-28 08:45:43.018970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:05.135 [2024-09-28 08:45:43.019066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:05.135 [2024-09-28 08:45:43.019190] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:05.135 [2024-09-28 08:45:43.019228] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:05.135 [2024-09-28 08:45:43.019481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:05.135 [2024-09-28 08:45:43.019694] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:05.135 [2024-09-28 08:45:43.019740] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:05.135 [2024-09-28 08:45:43.019966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.135 pt1 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.135 "name": "raid_bdev1", 00:08:05.135 "uuid": "4b534a23-5758-4f69-b861-38b1a03cfe65", 00:08:05.135 "strip_size_kb": 0, 00:08:05.135 "state": "online", 00:08:05.135 "raid_level": "raid1", 00:08:05.135 "superblock": true, 00:08:05.135 "num_base_bdevs": 2, 00:08:05.135 "num_base_bdevs_discovered": 1, 00:08:05.135 "num_base_bdevs_operational": 1, 00:08:05.135 
"base_bdevs_list": [ 00:08:05.135 { 00:08:05.135 "name": null, 00:08:05.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.135 "is_configured": false, 00:08:05.135 "data_offset": 2048, 00:08:05.135 "data_size": 63488 00:08:05.135 }, 00:08:05.135 { 00:08:05.135 "name": "pt2", 00:08:05.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:05.135 "is_configured": true, 00:08:05.135 "data_offset": 2048, 00:08:05.135 "data_size": 63488 00:08:05.135 } 00:08:05.135 ] 00:08:05.135 }' 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.135 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.702 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.703 [2024-09-28 08:45:43.467429] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4b534a23-5758-4f69-b861-38b1a03cfe65 '!=' 4b534a23-5758-4f69-b861-38b1a03cfe65 ']' 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63205 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 63205 ']' 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 63205 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63205 00:08:05.703 killing process with pid 63205 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63205' 00:08:05.703 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 63205 00:08:05.703 [2024-09-28 08:45:43.553359] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.703 [2024-09-28 08:45:43.553453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.703 [2024-09-28 08:45:43.553504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.703 [2024-09-28 08:45:43.553523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 08:45:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 63205 00:08:05.703 00:08:05.961 [2024-09-28 08:45:43.765088]
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:07.341 08:45:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:07.341 00:08:07.341 real 0m6.209s 00:08:07.341 user 0m9.148s 00:08:07.341 sys 0m1.122s 00:08:07.341 08:45:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.341 ************************************ 00:08:07.341 END TEST raid_superblock_test 00:08:07.341 ************************************ 00:08:07.341 08:45:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.341 08:45:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:07.341 08:45:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:07.341 08:45:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.341 08:45:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:07.341 ************************************ 00:08:07.341 START TEST raid_read_error_test 00:08:07.341 ************************************ 00:08:07.341 08:45:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:08:07.341 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:07.341 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:07.341 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:07.341 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:07.341 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:07.341 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:07.341 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:07.341 08:45:45 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:07.341 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BD8V1eS5Dk 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63531 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63531 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@831 -- # '[' -z 63531 ']' 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.342 08:45:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.342 [2024-09-28 08:45:45.278343] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:07.342 [2024-09-28 08:45:45.279059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63531 ] 00:08:07.601 [2024-09-28 08:45:45.470113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.859 [2024-09-28 08:45:45.705859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.117 [2024-09-28 08:45:45.934962] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.117 [2024-09-28 08:45:45.935085] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.117 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.117 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:08.117 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:08.117 08:45:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:08.117 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.117 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.376 BaseBdev1_malloc 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.376 true 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.376 [2024-09-28 08:45:46.167364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:08.376 [2024-09-28 08:45:46.167422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.376 [2024-09-28 08:45:46.167447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:08.376 [2024-09-28 08:45:46.167459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.376 [2024-09-28 08:45:46.169809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.376 [2024-09-28 08:45:46.169885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:08:08.376 BaseBdev1 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.376 BaseBdev2_malloc 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.376 true 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.376 [2024-09-28 08:45:46.266237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:08.376 [2024-09-28 08:45:46.266291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.376 [2024-09-28 08:45:46.266324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:08.376 [2024-09-28 08:45:46.266335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:08:08.376 [2024-09-28 08:45:46.268685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.376 [2024-09-28 08:45:46.268719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:08.376 BaseBdev2 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.376 [2024-09-28 08:45:46.278293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.376 [2024-09-28 08:45:46.280362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:08.376 [2024-09-28 08:45:46.280576] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:08.376 [2024-09-28 08:45:46.280590] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:08.376 [2024-09-28 08:45:46.280821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:08.376 [2024-09-28 08:45:46.280988] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:08.376 [2024-09-28 08:45:46.280999] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:08.376 [2024-09-28 08:45:46.281131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.376 "name": "raid_bdev1", 00:08:08.376 "uuid": "b0f6d4cf-048b-4a51-9763-9b8c69f7e55e", 00:08:08.376 "strip_size_kb": 0, 00:08:08.376 "state": "online", 00:08:08.376 "raid_level": "raid1", 00:08:08.376 "superblock": true, 00:08:08.376 "num_base_bdevs": 2, 00:08:08.376 "num_base_bdevs_discovered": 2, 00:08:08.376 "num_base_bdevs_operational": 
2, 00:08:08.376 "base_bdevs_list": [ 00:08:08.376 { 00:08:08.376 "name": "BaseBdev1", 00:08:08.376 "uuid": "3130aa2a-5485-573f-b52a-55709bb749a4", 00:08:08.376 "is_configured": true, 00:08:08.376 "data_offset": 2048, 00:08:08.376 "data_size": 63488 00:08:08.376 }, 00:08:08.376 { 00:08:08.376 "name": "BaseBdev2", 00:08:08.376 "uuid": "61b3ec27-d2bc-57f1-98bb-7d25b0281fac", 00:08:08.376 "is_configured": true, 00:08:08.376 "data_offset": 2048, 00:08:08.376 "data_size": 63488 00:08:08.376 } 00:08:08.376 ] 00:08:08.376 }' 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.376 08:45:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.945 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:08.945 08:45:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:08.945 [2024-09-28 08:45:46.838817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:09.882 
08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.882 "name": "raid_bdev1", 00:08:09.882 "uuid": "b0f6d4cf-048b-4a51-9763-9b8c69f7e55e", 00:08:09.882 "strip_size_kb": 0, 00:08:09.882 "state": "online", 00:08:09.882 "raid_level": "raid1", 00:08:09.882 "superblock": true, 00:08:09.882 "num_base_bdevs": 
2, 00:08:09.882 "num_base_bdevs_discovered": 2, 00:08:09.882 "num_base_bdevs_operational": 2, 00:08:09.882 "base_bdevs_list": [ 00:08:09.882 { 00:08:09.882 "name": "BaseBdev1", 00:08:09.882 "uuid": "3130aa2a-5485-573f-b52a-55709bb749a4", 00:08:09.882 "is_configured": true, 00:08:09.882 "data_offset": 2048, 00:08:09.882 "data_size": 63488 00:08:09.882 }, 00:08:09.882 { 00:08:09.882 "name": "BaseBdev2", 00:08:09.882 "uuid": "61b3ec27-d2bc-57f1-98bb-7d25b0281fac", 00:08:09.882 "is_configured": true, 00:08:09.882 "data_offset": 2048, 00:08:09.882 "data_size": 63488 00:08:09.882 } 00:08:09.882 ] 00:08:09.882 }' 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.882 08:45:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.451 08:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:10.451 08:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.451 08:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.451 [2024-09-28 08:45:48.235524] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:10.451 [2024-09-28 08:45:48.235567] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.451 [2024-09-28 08:45:48.238342] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.451 [2024-09-28 08:45:48.238429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.451 [2024-09-28 08:45:48.238536] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:10.451 [2024-09-28 08:45:48.238604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:10.451 08:45:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.451 { 00:08:10.451 "results": [ 00:08:10.451 { 00:08:10.451 "job": "raid_bdev1", 00:08:10.451 "core_mask": "0x1", 00:08:10.451 "workload": "randrw", 00:08:10.451 "percentage": 50, 00:08:10.451 "status": "finished", 00:08:10.451 "queue_depth": 1, 00:08:10.451 "io_size": 131072, 00:08:10.451 "runtime": 1.397265, 00:08:10.451 "iops": 14675.813106318416, 00:08:10.451 "mibps": 1834.476638289802, 00:08:10.451 "io_failed": 0, 00:08:10.451 "io_timeout": 0, 00:08:10.451 "avg_latency_us": 65.6660235772936, 00:08:10.451 "min_latency_us": 21.910917030567685, 00:08:10.451 "max_latency_us": 1502.46288209607 00:08:10.451 } 00:08:10.451 ], 00:08:10.451 "core_count": 1 00:08:10.451 } 00:08:10.451 08:45:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63531 00:08:10.451 08:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 63531 ']' 00:08:10.451 08:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 63531 00:08:10.451 08:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:10.451 08:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.451 08:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63531 00:08:10.451 08:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:10.451 08:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:10.451 killing process with pid 63531 00:08:10.451 08:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63531' 00:08:10.451 08:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 63531 00:08:10.451 [2024-09-28 08:45:48.283839] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:10.451 08:45:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 63531 00:08:10.451 [2024-09-28 08:45:48.427287] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:11.878 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:11.878 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BD8V1eS5Dk 00:08:11.878 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:11.878 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:11.878 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:11.878 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:11.878 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:11.878 08:45:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:11.878 ************************************ 00:08:11.878 END TEST raid_read_error_test 00:08:11.878 ************************************ 00:08:11.878 00:08:11.878 real 0m4.676s 00:08:11.878 user 0m5.392s 00:08:11.878 sys 0m0.720s 00:08:11.878 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.878 08:45:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.137 08:45:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:12.137 08:45:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:12.137 08:45:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.137 08:45:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.137 ************************************ 00:08:12.137 START TEST raid_write_error_test 00:08:12.137 ************************************ 00:08:12.137 08:45:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:12.137 
08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:12.137 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:12.138 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:12.138 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CbzEML5naY 00:08:12.138 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63676 00:08:12.138 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:12.138 08:45:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63676 00:08:12.138 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 63676 ']' 00:08:12.138 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.138 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.138 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.138 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.138 08:45:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.138 [2024-09-28 08:45:50.024720] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:12.138 [2024-09-28 08:45:50.024927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63676 ] 00:08:12.397 [2024-09-28 08:45:50.192327] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.655 [2024-09-28 08:45:50.446865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.914 [2024-09-28 08:45:50.679810] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.914 [2024-09-28 08:45:50.679854] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.914 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.914 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:12.914 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:12.914 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:12.914 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.914 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.914 BaseBdev1_malloc 00:08:12.914 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.914 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:12.914 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.914 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.914 true 00:08:12.914 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:12.914 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:12.914 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.914 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.914 [2024-09-28 08:45:50.907343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:12.914 [2024-09-28 08:45:50.907461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.914 [2024-09-28 08:45:50.907484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:12.914 [2024-09-28 08:45:50.907497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.173 [2024-09-28 08:45:50.909947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.173 [2024-09-28 08:45:50.910002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:13.173 BaseBdev1 00:08:13.173 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.173 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:13.173 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:13.173 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.173 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.173 BaseBdev2_malloc 00:08:13.173 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.173 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:13.173 08:45:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.173 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.173 true 00:08:13.173 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.173 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:13.173 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.173 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.173 [2024-09-28 08:45:50.989876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:13.173 [2024-09-28 08:45:50.989932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.173 [2024-09-28 08:45:50.989948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:13.173 [2024-09-28 08:45:50.989959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.173 [2024-09-28 08:45:50.992283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.173 [2024-09-28 08:45:50.992323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:13.173 BaseBdev2 00:08:13.173 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.173 08:45:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:13.173 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.173 08:45:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.173 [2024-09-28 08:45:51.001930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:13.173 [2024-09-28 08:45:51.004036] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:13.174 [2024-09-28 08:45:51.004242] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:13.174 [2024-09-28 08:45:51.004260] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:13.174 [2024-09-28 08:45:51.004519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:13.174 [2024-09-28 08:45:51.004750] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:13.174 [2024-09-28 08:45:51.004779] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:13.174 [2024-09-28 08:45:51.004970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.174 "name": "raid_bdev1", 00:08:13.174 "uuid": "02026fcf-16a8-42b1-9d5d-1626f625a72c", 00:08:13.174 "strip_size_kb": 0, 00:08:13.174 "state": "online", 00:08:13.174 "raid_level": "raid1", 00:08:13.174 "superblock": true, 00:08:13.174 "num_base_bdevs": 2, 00:08:13.174 "num_base_bdevs_discovered": 2, 00:08:13.174 "num_base_bdevs_operational": 2, 00:08:13.174 "base_bdevs_list": [ 00:08:13.174 { 00:08:13.174 "name": "BaseBdev1", 00:08:13.174 "uuid": "e7778ccb-6f94-5b73-831e-cc3ae3687d67", 00:08:13.174 "is_configured": true, 00:08:13.174 "data_offset": 2048, 00:08:13.174 "data_size": 63488 00:08:13.174 }, 00:08:13.174 { 00:08:13.174 "name": "BaseBdev2", 00:08:13.174 "uuid": "be848807-07e2-563e-9b75-60d712d08433", 00:08:13.174 "is_configured": true, 00:08:13.174 "data_offset": 2048, 00:08:13.174 "data_size": 63488 00:08:13.174 } 00:08:13.174 ] 00:08:13.174 }' 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.174 08:45:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.742 08:45:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:13.742 08:45:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:13.742 [2024-09-28 08:45:51.566347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.680 [2024-09-28 08:45:52.485066] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:14.680 [2024-09-28 08:45:52.485216] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:14.680 [2024-09-28 08:45:52.485432] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.680 "name": "raid_bdev1", 00:08:14.680 "uuid": "02026fcf-16a8-42b1-9d5d-1626f625a72c", 00:08:14.680 "strip_size_kb": 0, 00:08:14.680 "state": "online", 00:08:14.680 "raid_level": "raid1", 00:08:14.680 "superblock": true, 00:08:14.680 "num_base_bdevs": 2, 00:08:14.680 "num_base_bdevs_discovered": 1, 00:08:14.680 "num_base_bdevs_operational": 1, 00:08:14.680 "base_bdevs_list": [ 00:08:14.680 { 00:08:14.680 "name": null, 00:08:14.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.680 "is_configured": false, 00:08:14.680 "data_offset": 0, 00:08:14.680 "data_size": 63488 00:08:14.680 }, 00:08:14.680 { 00:08:14.680 "name": 
"BaseBdev2", 00:08:14.680 "uuid": "be848807-07e2-563e-9b75-60d712d08433", 00:08:14.680 "is_configured": true, 00:08:14.680 "data_offset": 2048, 00:08:14.680 "data_size": 63488 00:08:14.680 } 00:08:14.680 ] 00:08:14.680 }' 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.680 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.940 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:14.940 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.940 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.940 [2024-09-28 08:45:52.914071] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.940 [2024-09-28 08:45:52.914164] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.940 [2024-09-28 08:45:52.916785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.940 [2024-09-28 08:45:52.916867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.940 [2024-09-28 08:45:52.916950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.940 [2024-09-28 08:45:52.916991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:14.940 { 00:08:14.940 "results": [ 00:08:14.940 { 00:08:14.940 "job": "raid_bdev1", 00:08:14.940 "core_mask": "0x1", 00:08:14.940 "workload": "randrw", 00:08:14.940 "percentage": 50, 00:08:14.940 "status": "finished", 00:08:14.940 "queue_depth": 1, 00:08:14.940 "io_size": 131072, 00:08:14.940 "runtime": 1.348311, 00:08:14.940 "iops": 18358.524109051992, 00:08:14.940 "mibps": 2294.815513631499, 00:08:14.940 "io_failed": 0, 00:08:14.940 "io_timeout": 0, 
00:08:14.940 "avg_latency_us": 52.01344864554373, 00:08:14.940 "min_latency_us": 21.128384279475984, 00:08:14.940 "max_latency_us": 1531.0812227074236 00:08:14.940 } 00:08:14.940 ], 00:08:14.940 "core_count": 1 00:08:14.940 } 00:08:14.940 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.940 08:45:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63676 00:08:14.940 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 63676 ']' 00:08:14.940 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 63676 00:08:14.940 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:14.940 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:14.940 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63676 00:08:15.200 killing process with pid 63676 00:08:15.200 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:15.200 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:15.200 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63676' 00:08:15.200 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 63676 00:08:15.200 [2024-09-28 08:45:52.958235] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:15.200 08:45:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 63676 00:08:15.200 [2024-09-28 08:45:53.100006] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.582 08:45:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CbzEML5naY 00:08:16.582 08:45:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:16.582 08:45:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:16.582 08:45:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:16.582 08:45:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:16.582 ************************************ 00:08:16.582 END TEST raid_write_error_test 00:08:16.582 ************************************ 00:08:16.582 08:45:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:16.582 08:45:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:16.582 08:45:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:16.582 00:08:16.582 real 0m4.580s 00:08:16.582 user 0m5.305s 00:08:16.582 sys 0m0.661s 00:08:16.582 08:45:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.582 08:45:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.582 08:45:54 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:16.582 08:45:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:16.582 08:45:54 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:16.582 08:45:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:16.582 08:45:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.582 08:45:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.582 ************************************ 00:08:16.582 START TEST raid_state_function_test 00:08:16.582 ************************************ 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:16.582 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:16.843 
08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:16.843 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:16.843 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:16.843 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:16.843 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:16.843 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:16.843 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:16.843 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63820 00:08:16.843 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:16.843 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63820' 00:08:16.843 Process raid pid: 63820 00:08:16.843 08:45:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63820 00:08:16.843 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 63820 ']' 00:08:16.843 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.843 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.843 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:16.843 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.843 08:45:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.843 [2024-09-28 08:45:54.667643] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:16.843 [2024-09-28 08:45:54.667900] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.103 [2024-09-28 08:45:54.837455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.103 [2024-09-28 08:45:55.091377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.363 [2024-09-28 08:45:55.324466] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.363 [2024-09-28 08:45:55.324598] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.624 [2024-09-28 08:45:55.488998] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:17.624 [2024-09-28 08:45:55.489054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:17.624 [2024-09-28 08:45:55.489064] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:17.624 [2024-09-28 08:45:55.489074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.624 [2024-09-28 08:45:55.489080] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:17.624 [2024-09-28 08:45:55.489090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.624 08:45:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.624 "name": "Existed_Raid", 00:08:17.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.624 "strip_size_kb": 64, 00:08:17.624 "state": "configuring", 00:08:17.624 "raid_level": "raid0", 00:08:17.624 "superblock": false, 00:08:17.624 "num_base_bdevs": 3, 00:08:17.624 "num_base_bdevs_discovered": 0, 00:08:17.624 "num_base_bdevs_operational": 3, 00:08:17.624 "base_bdevs_list": [ 00:08:17.624 { 00:08:17.624 "name": "BaseBdev1", 00:08:17.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.624 "is_configured": false, 00:08:17.624 "data_offset": 0, 00:08:17.624 "data_size": 0 00:08:17.624 }, 00:08:17.624 { 00:08:17.624 "name": "BaseBdev2", 00:08:17.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.624 "is_configured": false, 00:08:17.624 "data_offset": 0, 00:08:17.624 "data_size": 0 00:08:17.624 }, 00:08:17.624 { 00:08:17.624 "name": "BaseBdev3", 00:08:17.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.624 "is_configured": false, 00:08:17.624 "data_offset": 0, 00:08:17.624 "data_size": 0 00:08:17.624 } 00:08:17.624 ] 00:08:17.624 }' 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.624 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.194 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:18.194 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.194 08:45:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.194 [2024-09-28 08:45:55.948156] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:18.194 [2024-09-28 08:45:55.948243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:18.194 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.194 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:18.194 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.194 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.194 [2024-09-28 08:45:55.960156] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:18.194 [2024-09-28 08:45:55.960237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:18.194 [2024-09-28 08:45:55.960264] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:18.194 [2024-09-28 08:45:55.960287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:18.194 [2024-09-28 08:45:55.960305] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:18.194 [2024-09-28 08:45:55.960326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:18.194 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.194 08:45:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:18.194 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:18.194 08:45:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.194 [2024-09-28 08:45:56.040284] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.194 BaseBdev1 00:08:18.194 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.194 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:18.194 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:18.194 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:18.194 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:18.194 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:18.194 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.195 [ 00:08:18.195 { 00:08:18.195 "name": "BaseBdev1", 00:08:18.195 "aliases": [ 00:08:18.195 "47949405-c3f0-46db-a3f1-dab97f67e137" 00:08:18.195 ], 00:08:18.195 
"product_name": "Malloc disk", 00:08:18.195 "block_size": 512, 00:08:18.195 "num_blocks": 65536, 00:08:18.195 "uuid": "47949405-c3f0-46db-a3f1-dab97f67e137", 00:08:18.195 "assigned_rate_limits": { 00:08:18.195 "rw_ios_per_sec": 0, 00:08:18.195 "rw_mbytes_per_sec": 0, 00:08:18.195 "r_mbytes_per_sec": 0, 00:08:18.195 "w_mbytes_per_sec": 0 00:08:18.195 }, 00:08:18.195 "claimed": true, 00:08:18.195 "claim_type": "exclusive_write", 00:08:18.195 "zoned": false, 00:08:18.195 "supported_io_types": { 00:08:18.195 "read": true, 00:08:18.195 "write": true, 00:08:18.195 "unmap": true, 00:08:18.195 "flush": true, 00:08:18.195 "reset": true, 00:08:18.195 "nvme_admin": false, 00:08:18.195 "nvme_io": false, 00:08:18.195 "nvme_io_md": false, 00:08:18.195 "write_zeroes": true, 00:08:18.195 "zcopy": true, 00:08:18.195 "get_zone_info": false, 00:08:18.195 "zone_management": false, 00:08:18.195 "zone_append": false, 00:08:18.195 "compare": false, 00:08:18.195 "compare_and_write": false, 00:08:18.195 "abort": true, 00:08:18.195 "seek_hole": false, 00:08:18.195 "seek_data": false, 00:08:18.195 "copy": true, 00:08:18.195 "nvme_iov_md": false 00:08:18.195 }, 00:08:18.195 "memory_domains": [ 00:08:18.195 { 00:08:18.195 "dma_device_id": "system", 00:08:18.195 "dma_device_type": 1 00:08:18.195 }, 00:08:18.195 { 00:08:18.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.195 "dma_device_type": 2 00:08:18.195 } 00:08:18.195 ], 00:08:18.195 "driver_specific": {} 00:08:18.195 } 00:08:18.195 ] 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.195 08:45:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.195 "name": "Existed_Raid", 00:08:18.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.195 "strip_size_kb": 64, 00:08:18.195 "state": "configuring", 00:08:18.195 "raid_level": "raid0", 00:08:18.195 "superblock": false, 00:08:18.195 "num_base_bdevs": 3, 00:08:18.195 "num_base_bdevs_discovered": 1, 00:08:18.195 "num_base_bdevs_operational": 3, 00:08:18.195 "base_bdevs_list": [ 00:08:18.195 { 00:08:18.195 "name": "BaseBdev1", 
00:08:18.195 "uuid": "47949405-c3f0-46db-a3f1-dab97f67e137", 00:08:18.195 "is_configured": true, 00:08:18.195 "data_offset": 0, 00:08:18.195 "data_size": 65536 00:08:18.195 }, 00:08:18.195 { 00:08:18.195 "name": "BaseBdev2", 00:08:18.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.195 "is_configured": false, 00:08:18.195 "data_offset": 0, 00:08:18.195 "data_size": 0 00:08:18.195 }, 00:08:18.195 { 00:08:18.195 "name": "BaseBdev3", 00:08:18.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.195 "is_configured": false, 00:08:18.195 "data_offset": 0, 00:08:18.195 "data_size": 0 00:08:18.195 } 00:08:18.195 ] 00:08:18.195 }' 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.195 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.766 [2024-09-28 08:45:56.523491] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:18.766 [2024-09-28 08:45:56.523547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.766 [2024-09-28 
08:45:56.535511] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.766 [2024-09-28 08:45:56.537735] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:18.766 [2024-09-28 08:45:56.537812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:18.766 [2024-09-28 08:45:56.537848] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:18.766 [2024-09-28 08:45:56.537872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.766 "name": "Existed_Raid", 00:08:18.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.766 "strip_size_kb": 64, 00:08:18.766 "state": "configuring", 00:08:18.766 "raid_level": "raid0", 00:08:18.766 "superblock": false, 00:08:18.766 "num_base_bdevs": 3, 00:08:18.766 "num_base_bdevs_discovered": 1, 00:08:18.766 "num_base_bdevs_operational": 3, 00:08:18.766 "base_bdevs_list": [ 00:08:18.766 { 00:08:18.766 "name": "BaseBdev1", 00:08:18.766 "uuid": "47949405-c3f0-46db-a3f1-dab97f67e137", 00:08:18.766 "is_configured": true, 00:08:18.766 "data_offset": 0, 00:08:18.766 "data_size": 65536 00:08:18.766 }, 00:08:18.766 { 00:08:18.766 "name": "BaseBdev2", 00:08:18.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.766 "is_configured": false, 00:08:18.766 "data_offset": 0, 00:08:18.766 "data_size": 0 00:08:18.766 }, 00:08:18.766 { 00:08:18.766 "name": "BaseBdev3", 00:08:18.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.766 "is_configured": false, 00:08:18.766 "data_offset": 0, 00:08:18.766 "data_size": 0 00:08:18.766 } 00:08:18.766 ] 00:08:18.766 }' 00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:18.766 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.026 08:45:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:19.026 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.026 08:45:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.287 [2024-09-28 08:45:57.029880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:19.287 BaseBdev2 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:19.287 08:45:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.287 [ 00:08:19.287 { 00:08:19.287 "name": "BaseBdev2", 00:08:19.287 "aliases": [ 00:08:19.287 "36bae5b9-e911-4c14-af75-030667404e43" 00:08:19.287 ], 00:08:19.287 "product_name": "Malloc disk", 00:08:19.287 "block_size": 512, 00:08:19.287 "num_blocks": 65536, 00:08:19.287 "uuid": "36bae5b9-e911-4c14-af75-030667404e43", 00:08:19.287 "assigned_rate_limits": { 00:08:19.287 "rw_ios_per_sec": 0, 00:08:19.287 "rw_mbytes_per_sec": 0, 00:08:19.287 "r_mbytes_per_sec": 0, 00:08:19.287 "w_mbytes_per_sec": 0 00:08:19.287 }, 00:08:19.287 "claimed": true, 00:08:19.287 "claim_type": "exclusive_write", 00:08:19.287 "zoned": false, 00:08:19.287 "supported_io_types": { 00:08:19.287 "read": true, 00:08:19.287 "write": true, 00:08:19.287 "unmap": true, 00:08:19.287 "flush": true, 00:08:19.287 "reset": true, 00:08:19.287 "nvme_admin": false, 00:08:19.287 "nvme_io": false, 00:08:19.287 "nvme_io_md": false, 00:08:19.287 "write_zeroes": true, 00:08:19.287 "zcopy": true, 00:08:19.287 "get_zone_info": false, 00:08:19.287 "zone_management": false, 00:08:19.287 "zone_append": false, 00:08:19.287 "compare": false, 00:08:19.287 "compare_and_write": false, 00:08:19.287 "abort": true, 00:08:19.287 "seek_hole": false, 00:08:19.287 "seek_data": false, 00:08:19.287 "copy": true, 00:08:19.287 "nvme_iov_md": false 00:08:19.287 }, 00:08:19.287 "memory_domains": [ 00:08:19.287 { 00:08:19.287 "dma_device_id": "system", 00:08:19.287 "dma_device_type": 1 00:08:19.287 }, 00:08:19.287 { 00:08:19.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.287 "dma_device_type": 2 00:08:19.287 } 00:08:19.287 ], 00:08:19.287 "driver_specific": {} 00:08:19.287 } 00:08:19.287 ] 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.287 08:45:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.287 "name": "Existed_Raid", 00:08:19.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.287 "strip_size_kb": 64, 00:08:19.287 "state": "configuring", 00:08:19.287 "raid_level": "raid0", 00:08:19.287 "superblock": false, 00:08:19.287 "num_base_bdevs": 3, 00:08:19.287 "num_base_bdevs_discovered": 2, 00:08:19.287 "num_base_bdevs_operational": 3, 00:08:19.287 "base_bdevs_list": [ 00:08:19.287 { 00:08:19.287 "name": "BaseBdev1", 00:08:19.287 "uuid": "47949405-c3f0-46db-a3f1-dab97f67e137", 00:08:19.287 "is_configured": true, 00:08:19.287 "data_offset": 0, 00:08:19.287 "data_size": 65536 00:08:19.287 }, 00:08:19.287 { 00:08:19.287 "name": "BaseBdev2", 00:08:19.287 "uuid": "36bae5b9-e911-4c14-af75-030667404e43", 00:08:19.287 "is_configured": true, 00:08:19.287 "data_offset": 0, 00:08:19.287 "data_size": 65536 00:08:19.287 }, 00:08:19.287 { 00:08:19.287 "name": "BaseBdev3", 00:08:19.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.287 "is_configured": false, 00:08:19.287 "data_offset": 0, 00:08:19.287 "data_size": 0 00:08:19.287 } 00:08:19.287 ] 00:08:19.287 }' 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.287 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.548 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:19.548 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.548 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.808 [2024-09-28 08:45:57.565426] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:19.808 [2024-09-28 08:45:57.565478] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:19.809 [2024-09-28 08:45:57.565494] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:19.809 [2024-09-28 08:45:57.566024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:19.809 [2024-09-28 08:45:57.566227] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:19.809 [2024-09-28 08:45:57.566246] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:19.809 [2024-09-28 08:45:57.566520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.809 BaseBdev3 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.809 
08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.809 [ 00:08:19.809 { 00:08:19.809 "name": "BaseBdev3", 00:08:19.809 "aliases": [ 00:08:19.809 "607188ed-082f-4b33-9791-a191b9b97c80" 00:08:19.809 ], 00:08:19.809 "product_name": "Malloc disk", 00:08:19.809 "block_size": 512, 00:08:19.809 "num_blocks": 65536, 00:08:19.809 "uuid": "607188ed-082f-4b33-9791-a191b9b97c80", 00:08:19.809 "assigned_rate_limits": { 00:08:19.809 "rw_ios_per_sec": 0, 00:08:19.809 "rw_mbytes_per_sec": 0, 00:08:19.809 "r_mbytes_per_sec": 0, 00:08:19.809 "w_mbytes_per_sec": 0 00:08:19.809 }, 00:08:19.809 "claimed": true, 00:08:19.809 "claim_type": "exclusive_write", 00:08:19.809 "zoned": false, 00:08:19.809 "supported_io_types": { 00:08:19.809 "read": true, 00:08:19.809 "write": true, 00:08:19.809 "unmap": true, 00:08:19.809 "flush": true, 00:08:19.809 "reset": true, 00:08:19.809 "nvme_admin": false, 00:08:19.809 "nvme_io": false, 00:08:19.809 "nvme_io_md": false, 00:08:19.809 "write_zeroes": true, 00:08:19.809 "zcopy": true, 00:08:19.809 "get_zone_info": false, 00:08:19.809 "zone_management": false, 00:08:19.809 "zone_append": false, 00:08:19.809 "compare": false, 00:08:19.809 "compare_and_write": false, 00:08:19.809 "abort": true, 00:08:19.809 "seek_hole": false, 00:08:19.809 "seek_data": false, 00:08:19.809 "copy": true, 00:08:19.809 "nvme_iov_md": false 00:08:19.809 }, 00:08:19.809 "memory_domains": [ 00:08:19.809 { 00:08:19.809 "dma_device_id": "system", 00:08:19.809 "dma_device_type": 1 00:08:19.809 }, 00:08:19.809 { 00:08:19.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.809 "dma_device_type": 2 00:08:19.809 } 00:08:19.809 ], 00:08:19.809 "driver_specific": {} 00:08:19.809 } 00:08:19.809 ] 
00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.809 "name": "Existed_Raid", 00:08:19.809 "uuid": "5e4bed4e-12b1-4df7-a1ac-add8435e41ae", 00:08:19.809 "strip_size_kb": 64, 00:08:19.809 "state": "online", 00:08:19.809 "raid_level": "raid0", 00:08:19.809 "superblock": false, 00:08:19.809 "num_base_bdevs": 3, 00:08:19.809 "num_base_bdevs_discovered": 3, 00:08:19.809 "num_base_bdevs_operational": 3, 00:08:19.809 "base_bdevs_list": [ 00:08:19.809 { 00:08:19.809 "name": "BaseBdev1", 00:08:19.809 "uuid": "47949405-c3f0-46db-a3f1-dab97f67e137", 00:08:19.809 "is_configured": true, 00:08:19.809 "data_offset": 0, 00:08:19.809 "data_size": 65536 00:08:19.809 }, 00:08:19.809 { 00:08:19.809 "name": "BaseBdev2", 00:08:19.809 "uuid": "36bae5b9-e911-4c14-af75-030667404e43", 00:08:19.809 "is_configured": true, 00:08:19.809 "data_offset": 0, 00:08:19.809 "data_size": 65536 00:08:19.809 }, 00:08:19.809 { 00:08:19.809 "name": "BaseBdev3", 00:08:19.809 "uuid": "607188ed-082f-4b33-9791-a191b9b97c80", 00:08:19.809 "is_configured": true, 00:08:19.809 "data_offset": 0, 00:08:19.809 "data_size": 65536 00:08:19.809 } 00:08:19.809 ] 00:08:19.809 }' 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.809 08:45:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.069 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:20.069 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:20.069 08:45:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:20.069 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:20.069 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:20.069 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:20.069 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:20.069 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.069 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:20.069 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.069 [2024-09-28 08:45:58.012986] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.069 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.069 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:20.069 "name": "Existed_Raid", 00:08:20.069 "aliases": [ 00:08:20.069 "5e4bed4e-12b1-4df7-a1ac-add8435e41ae" 00:08:20.069 ], 00:08:20.069 "product_name": "Raid Volume", 00:08:20.069 "block_size": 512, 00:08:20.069 "num_blocks": 196608, 00:08:20.069 "uuid": "5e4bed4e-12b1-4df7-a1ac-add8435e41ae", 00:08:20.069 "assigned_rate_limits": { 00:08:20.069 "rw_ios_per_sec": 0, 00:08:20.069 "rw_mbytes_per_sec": 0, 00:08:20.069 "r_mbytes_per_sec": 0, 00:08:20.069 "w_mbytes_per_sec": 0 00:08:20.069 }, 00:08:20.069 "claimed": false, 00:08:20.069 "zoned": false, 00:08:20.069 "supported_io_types": { 00:08:20.069 "read": true, 00:08:20.069 "write": true, 00:08:20.069 "unmap": true, 00:08:20.069 "flush": true, 00:08:20.069 "reset": true, 00:08:20.069 "nvme_admin": false, 00:08:20.069 "nvme_io": false, 00:08:20.069 "nvme_io_md": false, 00:08:20.069 "write_zeroes": true, 00:08:20.069 "zcopy": false, 00:08:20.069 "get_zone_info": false, 00:08:20.069 "zone_management": false, 00:08:20.069 
"zone_append": false, 00:08:20.069 "compare": false, 00:08:20.069 "compare_and_write": false, 00:08:20.069 "abort": false, 00:08:20.069 "seek_hole": false, 00:08:20.069 "seek_data": false, 00:08:20.069 "copy": false, 00:08:20.069 "nvme_iov_md": false 00:08:20.069 }, 00:08:20.069 "memory_domains": [ 00:08:20.069 { 00:08:20.069 "dma_device_id": "system", 00:08:20.069 "dma_device_type": 1 00:08:20.069 }, 00:08:20.069 { 00:08:20.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.069 "dma_device_type": 2 00:08:20.069 }, 00:08:20.069 { 00:08:20.069 "dma_device_id": "system", 00:08:20.069 "dma_device_type": 1 00:08:20.069 }, 00:08:20.069 { 00:08:20.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.069 "dma_device_type": 2 00:08:20.069 }, 00:08:20.069 { 00:08:20.069 "dma_device_id": "system", 00:08:20.069 "dma_device_type": 1 00:08:20.069 }, 00:08:20.069 { 00:08:20.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.069 "dma_device_type": 2 00:08:20.069 } 00:08:20.069 ], 00:08:20.069 "driver_specific": { 00:08:20.069 "raid": { 00:08:20.069 "uuid": "5e4bed4e-12b1-4df7-a1ac-add8435e41ae", 00:08:20.069 "strip_size_kb": 64, 00:08:20.069 "state": "online", 00:08:20.069 "raid_level": "raid0", 00:08:20.069 "superblock": false, 00:08:20.069 "num_base_bdevs": 3, 00:08:20.069 "num_base_bdevs_discovered": 3, 00:08:20.069 "num_base_bdevs_operational": 3, 00:08:20.069 "base_bdevs_list": [ 00:08:20.069 { 00:08:20.069 "name": "BaseBdev1", 00:08:20.069 "uuid": "47949405-c3f0-46db-a3f1-dab97f67e137", 00:08:20.069 "is_configured": true, 00:08:20.069 "data_offset": 0, 00:08:20.069 "data_size": 65536 00:08:20.069 }, 00:08:20.069 { 00:08:20.070 "name": "BaseBdev2", 00:08:20.070 "uuid": "36bae5b9-e911-4c14-af75-030667404e43", 00:08:20.070 "is_configured": true, 00:08:20.070 "data_offset": 0, 00:08:20.070 "data_size": 65536 00:08:20.070 }, 00:08:20.070 { 00:08:20.070 "name": "BaseBdev3", 00:08:20.070 "uuid": "607188ed-082f-4b33-9791-a191b9b97c80", 00:08:20.070 "is_configured": true, 
00:08:20.070 "data_offset": 0, 00:08:20.070 "data_size": 65536 00:08:20.070 } 00:08:20.070 ] 00:08:20.070 } 00:08:20.070 } 00:08:20.070 }' 00:08:20.070 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:20.330 BaseBdev2 00:08:20.330 BaseBdev3' 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.330 08:45:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.330 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.330 [2024-09-28 08:45:58.264262] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:20.330 [2024-09-28 08:45:58.264294] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.330 [2024-09-28 08:45:58.264353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.591 "name": "Existed_Raid", 00:08:20.591 "uuid": "5e4bed4e-12b1-4df7-a1ac-add8435e41ae", 00:08:20.591 "strip_size_kb": 64, 00:08:20.591 "state": "offline", 00:08:20.591 "raid_level": "raid0", 00:08:20.591 "superblock": false, 00:08:20.591 "num_base_bdevs": 3, 00:08:20.591 "num_base_bdevs_discovered": 2, 00:08:20.591 "num_base_bdevs_operational": 2, 00:08:20.591 "base_bdevs_list": [ 00:08:20.591 { 00:08:20.591 "name": null, 00:08:20.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.591 "is_configured": false, 00:08:20.591 "data_offset": 0, 00:08:20.591 "data_size": 65536 00:08:20.591 }, 00:08:20.591 { 00:08:20.591 "name": "BaseBdev2", 00:08:20.591 "uuid": "36bae5b9-e911-4c14-af75-030667404e43", 00:08:20.591 "is_configured": true, 00:08:20.591 "data_offset": 0, 00:08:20.591 "data_size": 65536 00:08:20.591 }, 00:08:20.591 { 00:08:20.591 "name": "BaseBdev3", 00:08:20.591 "uuid": "607188ed-082f-4b33-9791-a191b9b97c80", 00:08:20.591 "is_configured": true, 00:08:20.591 "data_offset": 0, 00:08:20.591 "data_size": 65536 00:08:20.591 } 00:08:20.591 ] 00:08:20.591 }' 00:08:20.591 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.591 08:45:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.851 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:20.851 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:20.851 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.851 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.851 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.851 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:20.851 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.851 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:20.851 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:20.851 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:20.851 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.851 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.851 [2024-09-28 08:45:58.814702] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:21.111 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.111 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:21.111 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:21.111 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.111 08:45:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.111 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.111 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:21.111 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.111 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:21.111 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:21.111 08:45:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:21.111 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.111 08:45:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.111 [2024-09-28 08:45:58.976476] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:21.111 [2024-09-28 08:45:58.976539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:21.111 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.111 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:21.111 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:21.111 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:21.111 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.111 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.111 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:21.111 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.372 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:21.372 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:21.372 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:21.372 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:21.372 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:21.372 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:21.372 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.372 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.372 BaseBdev2 00:08:21.372 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.372 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:21.372 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:21.372 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:21.372 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:21.372 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:21.372 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:21.372 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.373 [ 00:08:21.373 { 00:08:21.373 "name": "BaseBdev2", 00:08:21.373 "aliases": [ 00:08:21.373 "d50317e5-a2b4-4074-a317-878959391c6f" 00:08:21.373 ], 00:08:21.373 "product_name": "Malloc disk", 00:08:21.373 "block_size": 512, 00:08:21.373 "num_blocks": 65536, 00:08:21.373 "uuid": "d50317e5-a2b4-4074-a317-878959391c6f", 00:08:21.373 "assigned_rate_limits": { 00:08:21.373 "rw_ios_per_sec": 0, 00:08:21.373 "rw_mbytes_per_sec": 0, 00:08:21.373 "r_mbytes_per_sec": 0, 00:08:21.373 "w_mbytes_per_sec": 0 00:08:21.373 }, 00:08:21.373 "claimed": false, 00:08:21.373 "zoned": false, 00:08:21.373 "supported_io_types": { 00:08:21.373 "read": true, 00:08:21.373 "write": true, 00:08:21.373 "unmap": true, 00:08:21.373 "flush": true, 00:08:21.373 "reset": true, 00:08:21.373 "nvme_admin": false, 00:08:21.373 "nvme_io": false, 00:08:21.373 "nvme_io_md": false, 00:08:21.373 "write_zeroes": true, 00:08:21.373 "zcopy": true, 00:08:21.373 "get_zone_info": false, 00:08:21.373 "zone_management": false, 00:08:21.373 "zone_append": false, 00:08:21.373 "compare": false, 00:08:21.373 "compare_and_write": false, 00:08:21.373 "abort": true, 00:08:21.373 "seek_hole": false, 00:08:21.373 "seek_data": false, 00:08:21.373 "copy": true, 00:08:21.373 "nvme_iov_md": false 00:08:21.373 }, 00:08:21.373 "memory_domains": [ 00:08:21.373 { 00:08:21.373 "dma_device_id": "system", 00:08:21.373 "dma_device_type": 1 00:08:21.373 }, 
00:08:21.373 { 00:08:21.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.373 "dma_device_type": 2 00:08:21.373 } 00:08:21.373 ], 00:08:21.373 "driver_specific": {} 00:08:21.373 } 00:08:21.373 ] 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.373 BaseBdev3 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.373 [ 00:08:21.373 { 00:08:21.373 "name": "BaseBdev3", 00:08:21.373 "aliases": [ 00:08:21.373 "144bea6b-22a8-48ad-bc0e-b73eaeb1790f" 00:08:21.373 ], 00:08:21.373 "product_name": "Malloc disk", 00:08:21.373 "block_size": 512, 00:08:21.373 "num_blocks": 65536, 00:08:21.373 "uuid": "144bea6b-22a8-48ad-bc0e-b73eaeb1790f", 00:08:21.373 "assigned_rate_limits": { 00:08:21.373 "rw_ios_per_sec": 0, 00:08:21.373 "rw_mbytes_per_sec": 0, 00:08:21.373 "r_mbytes_per_sec": 0, 00:08:21.373 "w_mbytes_per_sec": 0 00:08:21.373 }, 00:08:21.373 "claimed": false, 00:08:21.373 "zoned": false, 00:08:21.373 "supported_io_types": { 00:08:21.373 "read": true, 00:08:21.373 "write": true, 00:08:21.373 "unmap": true, 00:08:21.373 "flush": true, 00:08:21.373 "reset": true, 00:08:21.373 "nvme_admin": false, 00:08:21.373 "nvme_io": false, 00:08:21.373 "nvme_io_md": false, 00:08:21.373 "write_zeroes": true, 00:08:21.373 "zcopy": true, 00:08:21.373 "get_zone_info": false, 00:08:21.373 "zone_management": false, 00:08:21.373 "zone_append": false, 00:08:21.373 "compare": false, 00:08:21.373 "compare_and_write": false, 00:08:21.373 "abort": true, 00:08:21.373 "seek_hole": false, 00:08:21.373 "seek_data": false, 00:08:21.373 "copy": true, 00:08:21.373 "nvme_iov_md": false 00:08:21.373 }, 00:08:21.373 "memory_domains": [ 00:08:21.373 { 00:08:21.373 "dma_device_id": "system", 00:08:21.373 "dma_device_type": 1 00:08:21.373 }, 00:08:21.373 { 
00:08:21.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.373 "dma_device_type": 2 00:08:21.373 } 00:08:21.373 ], 00:08:21.373 "driver_specific": {} 00:08:21.373 } 00:08:21.373 ] 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.373 [2024-09-28 08:45:59.286352] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:21.373 [2024-09-28 08:45:59.286415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.373 [2024-09-28 08:45:59.286437] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:21.373 [2024-09-28 08:45:59.288459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.373 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.373 "name": "Existed_Raid", 00:08:21.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.373 "strip_size_kb": 64, 00:08:21.373 "state": "configuring", 00:08:21.373 "raid_level": "raid0", 00:08:21.373 "superblock": false, 00:08:21.373 "num_base_bdevs": 3, 00:08:21.373 "num_base_bdevs_discovered": 2, 00:08:21.373 "num_base_bdevs_operational": 3, 00:08:21.373 "base_bdevs_list": [ 00:08:21.373 { 00:08:21.373 "name": "BaseBdev1", 00:08:21.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.373 
"is_configured": false, 00:08:21.373 "data_offset": 0, 00:08:21.373 "data_size": 0 00:08:21.373 }, 00:08:21.373 { 00:08:21.373 "name": "BaseBdev2", 00:08:21.373 "uuid": "d50317e5-a2b4-4074-a317-878959391c6f", 00:08:21.373 "is_configured": true, 00:08:21.373 "data_offset": 0, 00:08:21.373 "data_size": 65536 00:08:21.373 }, 00:08:21.374 { 00:08:21.374 "name": "BaseBdev3", 00:08:21.374 "uuid": "144bea6b-22a8-48ad-bc0e-b73eaeb1790f", 00:08:21.374 "is_configured": true, 00:08:21.374 "data_offset": 0, 00:08:21.374 "data_size": 65536 00:08:21.374 } 00:08:21.374 ] 00:08:21.374 }' 00:08:21.374 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.374 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.944 [2024-09-28 08:45:59.725633] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.944 08:45:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.944 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.944 "name": "Existed_Raid", 00:08:21.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.944 "strip_size_kb": 64, 00:08:21.944 "state": "configuring", 00:08:21.944 "raid_level": "raid0", 00:08:21.944 "superblock": false, 00:08:21.944 "num_base_bdevs": 3, 00:08:21.944 "num_base_bdevs_discovered": 1, 00:08:21.944 "num_base_bdevs_operational": 3, 00:08:21.944 "base_bdevs_list": [ 00:08:21.944 { 00:08:21.944 "name": "BaseBdev1", 00:08:21.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.944 "is_configured": false, 00:08:21.944 "data_offset": 0, 00:08:21.944 "data_size": 0 00:08:21.944 }, 00:08:21.944 { 00:08:21.944 "name": null, 00:08:21.944 "uuid": "d50317e5-a2b4-4074-a317-878959391c6f", 00:08:21.944 "is_configured": false, 00:08:21.944 "data_offset": 0, 
00:08:21.944 "data_size": 65536 00:08:21.944 }, 00:08:21.944 { 00:08:21.944 "name": "BaseBdev3", 00:08:21.944 "uuid": "144bea6b-22a8-48ad-bc0e-b73eaeb1790f", 00:08:21.944 "is_configured": true, 00:08:21.944 "data_offset": 0, 00:08:21.944 "data_size": 65536 00:08:21.945 } 00:08:21.945 ] 00:08:21.945 }' 00:08:21.945 08:45:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.945 08:45:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.205 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.205 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.205 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:22.205 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.205 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.205 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:22.205 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:22.205 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.205 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.464 [2024-09-28 08:46:00.229912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.464 BaseBdev1 00:08:22.464 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.464 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:22.464 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:08:22.464 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:22.464 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:22.464 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:22.464 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:22.464 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:22.464 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.465 [ 00:08:22.465 { 00:08:22.465 "name": "BaseBdev1", 00:08:22.465 "aliases": [ 00:08:22.465 "49fdfb04-6159-4247-adc4-7901b24491e9" 00:08:22.465 ], 00:08:22.465 "product_name": "Malloc disk", 00:08:22.465 "block_size": 512, 00:08:22.465 "num_blocks": 65536, 00:08:22.465 "uuid": "49fdfb04-6159-4247-adc4-7901b24491e9", 00:08:22.465 "assigned_rate_limits": { 00:08:22.465 "rw_ios_per_sec": 0, 00:08:22.465 "rw_mbytes_per_sec": 0, 00:08:22.465 "r_mbytes_per_sec": 0, 00:08:22.465 "w_mbytes_per_sec": 0 00:08:22.465 }, 00:08:22.465 "claimed": true, 00:08:22.465 "claim_type": "exclusive_write", 00:08:22.465 "zoned": false, 00:08:22.465 "supported_io_types": { 00:08:22.465 "read": true, 00:08:22.465 "write": true, 00:08:22.465 "unmap": 
true, 00:08:22.465 "flush": true, 00:08:22.465 "reset": true, 00:08:22.465 "nvme_admin": false, 00:08:22.465 "nvme_io": false, 00:08:22.465 "nvme_io_md": false, 00:08:22.465 "write_zeroes": true, 00:08:22.465 "zcopy": true, 00:08:22.465 "get_zone_info": false, 00:08:22.465 "zone_management": false, 00:08:22.465 "zone_append": false, 00:08:22.465 "compare": false, 00:08:22.465 "compare_and_write": false, 00:08:22.465 "abort": true, 00:08:22.465 "seek_hole": false, 00:08:22.465 "seek_data": false, 00:08:22.465 "copy": true, 00:08:22.465 "nvme_iov_md": false 00:08:22.465 }, 00:08:22.465 "memory_domains": [ 00:08:22.465 { 00:08:22.465 "dma_device_id": "system", 00:08:22.465 "dma_device_type": 1 00:08:22.465 }, 00:08:22.465 { 00:08:22.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.465 "dma_device_type": 2 00:08:22.465 } 00:08:22.465 ], 00:08:22.465 "driver_specific": {} 00:08:22.465 } 00:08:22.465 ] 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.465 08:46:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.465 "name": "Existed_Raid", 00:08:22.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.465 "strip_size_kb": 64, 00:08:22.465 "state": "configuring", 00:08:22.465 "raid_level": "raid0", 00:08:22.465 "superblock": false, 00:08:22.465 "num_base_bdevs": 3, 00:08:22.465 "num_base_bdevs_discovered": 2, 00:08:22.465 "num_base_bdevs_operational": 3, 00:08:22.465 "base_bdevs_list": [ 00:08:22.465 { 00:08:22.465 "name": "BaseBdev1", 00:08:22.465 "uuid": "49fdfb04-6159-4247-adc4-7901b24491e9", 00:08:22.465 "is_configured": true, 00:08:22.465 "data_offset": 0, 00:08:22.465 "data_size": 65536 00:08:22.465 }, 00:08:22.465 { 00:08:22.465 "name": null, 00:08:22.465 "uuid": "d50317e5-a2b4-4074-a317-878959391c6f", 00:08:22.465 "is_configured": false, 00:08:22.465 "data_offset": 0, 00:08:22.465 "data_size": 65536 00:08:22.465 }, 00:08:22.465 { 00:08:22.465 "name": "BaseBdev3", 00:08:22.465 "uuid": "144bea6b-22a8-48ad-bc0e-b73eaeb1790f", 00:08:22.465 "is_configured": true, 00:08:22.465 "data_offset": 0, 
00:08:22.465 "data_size": 65536 00:08:22.465 } 00:08:22.465 ] 00:08:22.465 }' 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.465 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.036 [2024-09-28 08:46:00.769080] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.036 "name": "Existed_Raid", 00:08:23.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.036 "strip_size_kb": 64, 00:08:23.036 "state": "configuring", 00:08:23.036 "raid_level": "raid0", 00:08:23.036 "superblock": false, 00:08:23.036 "num_base_bdevs": 3, 00:08:23.036 "num_base_bdevs_discovered": 1, 00:08:23.036 "num_base_bdevs_operational": 3, 00:08:23.036 "base_bdevs_list": [ 00:08:23.036 { 00:08:23.036 "name": "BaseBdev1", 00:08:23.036 "uuid": "49fdfb04-6159-4247-adc4-7901b24491e9", 00:08:23.036 "is_configured": true, 00:08:23.036 "data_offset": 0, 00:08:23.036 "data_size": 65536 00:08:23.036 }, 00:08:23.036 { 
00:08:23.036 "name": null, 00:08:23.036 "uuid": "d50317e5-a2b4-4074-a317-878959391c6f", 00:08:23.036 "is_configured": false, 00:08:23.036 "data_offset": 0, 00:08:23.036 "data_size": 65536 00:08:23.036 }, 00:08:23.036 { 00:08:23.036 "name": null, 00:08:23.036 "uuid": "144bea6b-22a8-48ad-bc0e-b73eaeb1790f", 00:08:23.036 "is_configured": false, 00:08:23.036 "data_offset": 0, 00:08:23.036 "data_size": 65536 00:08:23.036 } 00:08:23.036 ] 00:08:23.036 }' 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.036 08:46:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.296 [2024-09-28 08:46:01.188338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.296 "name": "Existed_Raid", 00:08:23.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.296 "strip_size_kb": 64, 00:08:23.296 "state": "configuring", 00:08:23.296 "raid_level": "raid0", 00:08:23.296 
"superblock": false, 00:08:23.296 "num_base_bdevs": 3, 00:08:23.296 "num_base_bdevs_discovered": 2, 00:08:23.296 "num_base_bdevs_operational": 3, 00:08:23.296 "base_bdevs_list": [ 00:08:23.296 { 00:08:23.296 "name": "BaseBdev1", 00:08:23.296 "uuid": "49fdfb04-6159-4247-adc4-7901b24491e9", 00:08:23.296 "is_configured": true, 00:08:23.296 "data_offset": 0, 00:08:23.296 "data_size": 65536 00:08:23.296 }, 00:08:23.296 { 00:08:23.296 "name": null, 00:08:23.296 "uuid": "d50317e5-a2b4-4074-a317-878959391c6f", 00:08:23.296 "is_configured": false, 00:08:23.296 "data_offset": 0, 00:08:23.296 "data_size": 65536 00:08:23.296 }, 00:08:23.296 { 00:08:23.296 "name": "BaseBdev3", 00:08:23.296 "uuid": "144bea6b-22a8-48ad-bc0e-b73eaeb1790f", 00:08:23.296 "is_configured": true, 00:08:23.296 "data_offset": 0, 00:08:23.296 "data_size": 65536 00:08:23.296 } 00:08:23.296 ] 00:08:23.296 }' 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.296 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.867 [2024-09-28 08:46:01.667592] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.867 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.868 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.868 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.868 08:46:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.868 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.868 "name": "Existed_Raid", 00:08:23.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.868 "strip_size_kb": 64, 00:08:23.868 "state": "configuring", 00:08:23.868 "raid_level": "raid0", 00:08:23.868 "superblock": false, 00:08:23.868 "num_base_bdevs": 3, 00:08:23.868 "num_base_bdevs_discovered": 1, 00:08:23.868 "num_base_bdevs_operational": 3, 00:08:23.868 "base_bdevs_list": [ 00:08:23.868 { 00:08:23.868 "name": null, 00:08:23.868 "uuid": "49fdfb04-6159-4247-adc4-7901b24491e9", 00:08:23.868 "is_configured": false, 00:08:23.868 "data_offset": 0, 00:08:23.868 "data_size": 65536 00:08:23.868 }, 00:08:23.868 { 00:08:23.868 "name": null, 00:08:23.868 "uuid": "d50317e5-a2b4-4074-a317-878959391c6f", 00:08:23.868 "is_configured": false, 00:08:23.868 "data_offset": 0, 00:08:23.868 "data_size": 65536 00:08:23.868 }, 00:08:23.868 { 00:08:23.868 "name": "BaseBdev3", 00:08:23.868 "uuid": "144bea6b-22a8-48ad-bc0e-b73eaeb1790f", 00:08:23.868 "is_configured": true, 00:08:23.868 "data_offset": 0, 00:08:23.868 "data_size": 65536 00:08:23.868 } 00:08:23.868 ] 00:08:23.868 }' 00:08:23.868 08:46:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.868 08:46:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.438 [2024-09-28 08:46:02.204507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.438 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:24.439 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.439 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.439 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.439 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.439 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.439 "name": "Existed_Raid", 00:08:24.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.439 "strip_size_kb": 64, 00:08:24.439 "state": "configuring", 00:08:24.439 "raid_level": "raid0", 00:08:24.439 "superblock": false, 00:08:24.439 "num_base_bdevs": 3, 00:08:24.439 "num_base_bdevs_discovered": 2, 00:08:24.439 "num_base_bdevs_operational": 3, 00:08:24.439 "base_bdevs_list": [ 00:08:24.439 { 00:08:24.439 "name": null, 00:08:24.439 "uuid": "49fdfb04-6159-4247-adc4-7901b24491e9", 00:08:24.439 "is_configured": false, 00:08:24.439 "data_offset": 0, 00:08:24.439 "data_size": 65536 00:08:24.439 }, 00:08:24.439 { 00:08:24.439 "name": "BaseBdev2", 00:08:24.439 "uuid": "d50317e5-a2b4-4074-a317-878959391c6f", 00:08:24.439 "is_configured": true, 00:08:24.439 "data_offset": 0, 00:08:24.439 "data_size": 65536 00:08:24.439 }, 00:08:24.439 { 00:08:24.439 "name": "BaseBdev3", 00:08:24.439 "uuid": "144bea6b-22a8-48ad-bc0e-b73eaeb1790f", 00:08:24.439 "is_configured": true, 00:08:24.439 "data_offset": 0, 00:08:24.439 "data_size": 65536 00:08:24.439 } 00:08:24.439 ] 00:08:24.439 }' 00:08:24.439 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.439 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.699 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.699 
08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.699 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.699 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:24.699 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 49fdfb04-6159-4247-adc4-7901b24491e9 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.959 [2024-09-28 08:46:02.800566] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:24.959 [2024-09-28 08:46:02.800613] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:24.959 [2024-09-28 08:46:02.800623] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:24.959 [2024-09-28 08:46:02.800948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:24.959 [2024-09-28 08:46:02.801162] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:24.959 [2024-09-28 08:46:02.801177] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:24.959 [2024-09-28 08:46:02.801442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.959 NewBaseBdev 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:24.959 [ 00:08:24.959 { 00:08:24.959 "name": "NewBaseBdev", 00:08:24.959 "aliases": [ 00:08:24.959 "49fdfb04-6159-4247-adc4-7901b24491e9" 00:08:24.959 ], 00:08:24.959 "product_name": "Malloc disk", 00:08:24.959 "block_size": 512, 00:08:24.959 "num_blocks": 65536, 00:08:24.959 "uuid": "49fdfb04-6159-4247-adc4-7901b24491e9", 00:08:24.959 "assigned_rate_limits": { 00:08:24.959 "rw_ios_per_sec": 0, 00:08:24.959 "rw_mbytes_per_sec": 0, 00:08:24.959 "r_mbytes_per_sec": 0, 00:08:24.959 "w_mbytes_per_sec": 0 00:08:24.959 }, 00:08:24.959 "claimed": true, 00:08:24.959 "claim_type": "exclusive_write", 00:08:24.959 "zoned": false, 00:08:24.959 "supported_io_types": { 00:08:24.959 "read": true, 00:08:24.959 "write": true, 00:08:24.959 "unmap": true, 00:08:24.959 "flush": true, 00:08:24.959 "reset": true, 00:08:24.959 "nvme_admin": false, 00:08:24.959 "nvme_io": false, 00:08:24.959 "nvme_io_md": false, 00:08:24.959 "write_zeroes": true, 00:08:24.959 "zcopy": true, 00:08:24.959 "get_zone_info": false, 00:08:24.959 "zone_management": false, 00:08:24.959 "zone_append": false, 00:08:24.959 "compare": false, 00:08:24.959 "compare_and_write": false, 00:08:24.959 "abort": true, 00:08:24.959 "seek_hole": false, 00:08:24.959 "seek_data": false, 00:08:24.959 "copy": true, 00:08:24.959 "nvme_iov_md": false 00:08:24.959 }, 00:08:24.959 "memory_domains": [ 00:08:24.959 { 00:08:24.959 "dma_device_id": "system", 00:08:24.959 "dma_device_type": 1 00:08:24.959 }, 00:08:24.959 { 00:08:24.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.959 "dma_device_type": 2 00:08:24.959 } 00:08:24.959 ], 00:08:24.959 "driver_specific": {} 00:08:24.959 } 00:08:24.959 ] 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.959 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.959 "name": "Existed_Raid", 00:08:24.959 "uuid": "3854beb1-6828-425b-a9fd-b8a2633e2664", 00:08:24.959 "strip_size_kb": 64, 00:08:24.960 "state": "online", 00:08:24.960 "raid_level": "raid0", 00:08:24.960 "superblock": false, 00:08:24.960 "num_base_bdevs": 3, 00:08:24.960 
"num_base_bdevs_discovered": 3, 00:08:24.960 "num_base_bdevs_operational": 3, 00:08:24.960 "base_bdevs_list": [ 00:08:24.960 { 00:08:24.960 "name": "NewBaseBdev", 00:08:24.960 "uuid": "49fdfb04-6159-4247-adc4-7901b24491e9", 00:08:24.960 "is_configured": true, 00:08:24.960 "data_offset": 0, 00:08:24.960 "data_size": 65536 00:08:24.960 }, 00:08:24.960 { 00:08:24.960 "name": "BaseBdev2", 00:08:24.960 "uuid": "d50317e5-a2b4-4074-a317-878959391c6f", 00:08:24.960 "is_configured": true, 00:08:24.960 "data_offset": 0, 00:08:24.960 "data_size": 65536 00:08:24.960 }, 00:08:24.960 { 00:08:24.960 "name": "BaseBdev3", 00:08:24.960 "uuid": "144bea6b-22a8-48ad-bc0e-b73eaeb1790f", 00:08:24.960 "is_configured": true, 00:08:24.960 "data_offset": 0, 00:08:24.960 "data_size": 65536 00:08:24.960 } 00:08:24.960 ] 00:08:24.960 }' 00:08:24.960 08:46:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.960 08:46:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.532 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:25.532 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:25.532 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:25.532 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:25.532 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:25.532 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:25.532 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:25.532 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:25.532 08:46:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.532 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.532 [2024-09-28 08:46:03.252111] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.532 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.532 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:25.532 "name": "Existed_Raid", 00:08:25.532 "aliases": [ 00:08:25.532 "3854beb1-6828-425b-a9fd-b8a2633e2664" 00:08:25.532 ], 00:08:25.532 "product_name": "Raid Volume", 00:08:25.532 "block_size": 512, 00:08:25.532 "num_blocks": 196608, 00:08:25.532 "uuid": "3854beb1-6828-425b-a9fd-b8a2633e2664", 00:08:25.532 "assigned_rate_limits": { 00:08:25.532 "rw_ios_per_sec": 0, 00:08:25.532 "rw_mbytes_per_sec": 0, 00:08:25.532 "r_mbytes_per_sec": 0, 00:08:25.532 "w_mbytes_per_sec": 0 00:08:25.532 }, 00:08:25.532 "claimed": false, 00:08:25.532 "zoned": false, 00:08:25.532 "supported_io_types": { 00:08:25.532 "read": true, 00:08:25.532 "write": true, 00:08:25.532 "unmap": true, 00:08:25.532 "flush": true, 00:08:25.532 "reset": true, 00:08:25.532 "nvme_admin": false, 00:08:25.532 "nvme_io": false, 00:08:25.532 "nvme_io_md": false, 00:08:25.532 "write_zeroes": true, 00:08:25.532 "zcopy": false, 00:08:25.532 "get_zone_info": false, 00:08:25.532 "zone_management": false, 00:08:25.532 "zone_append": false, 00:08:25.532 "compare": false, 00:08:25.532 "compare_and_write": false, 00:08:25.532 "abort": false, 00:08:25.532 "seek_hole": false, 00:08:25.532 "seek_data": false, 00:08:25.532 "copy": false, 00:08:25.532 "nvme_iov_md": false 00:08:25.532 }, 00:08:25.532 "memory_domains": [ 00:08:25.532 { 00:08:25.532 "dma_device_id": "system", 00:08:25.532 "dma_device_type": 1 00:08:25.532 }, 00:08:25.532 { 00:08:25.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.532 "dma_device_type": 2 00:08:25.532 }, 
00:08:25.532 { 00:08:25.532 "dma_device_id": "system", 00:08:25.532 "dma_device_type": 1 00:08:25.532 }, 00:08:25.532 { 00:08:25.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.532 "dma_device_type": 2 00:08:25.532 }, 00:08:25.532 { 00:08:25.532 "dma_device_id": "system", 00:08:25.532 "dma_device_type": 1 00:08:25.532 }, 00:08:25.532 { 00:08:25.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.532 "dma_device_type": 2 00:08:25.532 } 00:08:25.532 ], 00:08:25.532 "driver_specific": { 00:08:25.532 "raid": { 00:08:25.532 "uuid": "3854beb1-6828-425b-a9fd-b8a2633e2664", 00:08:25.532 "strip_size_kb": 64, 00:08:25.532 "state": "online", 00:08:25.532 "raid_level": "raid0", 00:08:25.532 "superblock": false, 00:08:25.532 "num_base_bdevs": 3, 00:08:25.532 "num_base_bdevs_discovered": 3, 00:08:25.532 "num_base_bdevs_operational": 3, 00:08:25.532 "base_bdevs_list": [ 00:08:25.532 { 00:08:25.532 "name": "NewBaseBdev", 00:08:25.532 "uuid": "49fdfb04-6159-4247-adc4-7901b24491e9", 00:08:25.532 "is_configured": true, 00:08:25.532 "data_offset": 0, 00:08:25.532 "data_size": 65536 00:08:25.532 }, 00:08:25.532 { 00:08:25.532 "name": "BaseBdev2", 00:08:25.532 "uuid": "d50317e5-a2b4-4074-a317-878959391c6f", 00:08:25.532 "is_configured": true, 00:08:25.532 "data_offset": 0, 00:08:25.532 "data_size": 65536 00:08:25.532 }, 00:08:25.532 { 00:08:25.532 "name": "BaseBdev3", 00:08:25.532 "uuid": "144bea6b-22a8-48ad-bc0e-b73eaeb1790f", 00:08:25.532 "is_configured": true, 00:08:25.532 "data_offset": 0, 00:08:25.532 "data_size": 65536 00:08:25.532 } 00:08:25.532 ] 00:08:25.532 } 00:08:25.532 } 00:08:25.532 }' 00:08:25.532 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:25.532 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:25.532 BaseBdev2 00:08:25.532 BaseBdev3' 00:08:25.532 08:46:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.532 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.533 [2024-09-28 08:46:03.475399] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:25.533 [2024-09-28 08:46:03.475429] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.533 [2024-09-28 08:46:03.475501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.533 [2024-09-28 08:46:03.475555] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.533 [2024-09-28 08:46:03.475570] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63820 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 63820 ']' 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 63820 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63820 00:08:25.533 killing process with pid 63820 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63820' 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 63820 00:08:25.533 [2024-09-28 08:46:03.521908] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.533 08:46:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 63820 00:08:26.151 [2024-09-28 08:46:03.845814] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:27.532 ************************************ 00:08:27.532 END TEST raid_state_function_test 00:08:27.532 ************************************ 00:08:27.532 08:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:27.532 00:08:27.532 real 0m10.613s 
00:08:27.532 user 0m16.487s 00:08:27.532 sys 0m1.949s 00:08:27.532 08:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.532 08:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.532 08:46:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:27.532 08:46:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:27.532 08:46:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.532 08:46:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:27.532 ************************************ 00:08:27.532 START TEST raid_state_function_test_sb 00:08:27.532 ************************************ 00:08:27.532 08:46:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:08:27.532 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:27.532 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:27.532 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:27.532 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:27.532 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:27.532 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.532 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:27.532 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:27.532 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64442 00:08:27.533 08:46:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64442' 00:08:27.533 Process raid pid: 64442 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64442 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 64442 ']' 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.533 08:46:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.533 [2024-09-28 08:46:05.356045] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:27.533 [2024-09-28 08:46:05.356162] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.533 [2024-09-28 08:46:05.522813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.793 [2024-09-28 08:46:05.773040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.052 [2024-09-28 08:46:06.007204] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.052 [2024-09-28 08:46:06.007241] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.311 [2024-09-28 08:46:06.184287] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:28.311 [2024-09-28 08:46:06.184358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:28.311 [2024-09-28 08:46:06.184368] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.311 [2024-09-28 08:46:06.184378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.311 [2024-09-28 08:46:06.184385] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:28.311 [2024-09-28 08:46:06.184395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.311 "name": "Existed_Raid", 00:08:28.311 "uuid": "a58293af-2edb-4e84-b7ee-d5cc09bd7d54", 00:08:28.311 "strip_size_kb": 64, 00:08:28.311 "state": "configuring", 00:08:28.311 "raid_level": "raid0", 00:08:28.311 "superblock": true, 00:08:28.311 "num_base_bdevs": 3, 00:08:28.311 "num_base_bdevs_discovered": 0, 00:08:28.311 "num_base_bdevs_operational": 3, 00:08:28.311 "base_bdevs_list": [ 00:08:28.311 { 00:08:28.311 "name": "BaseBdev1", 00:08:28.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.311 "is_configured": false, 00:08:28.311 "data_offset": 0, 00:08:28.311 "data_size": 0 00:08:28.311 }, 00:08:28.311 { 00:08:28.311 "name": "BaseBdev2", 00:08:28.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.311 "is_configured": false, 00:08:28.311 "data_offset": 0, 00:08:28.311 "data_size": 0 00:08:28.311 }, 00:08:28.311 { 00:08:28.311 "name": "BaseBdev3", 00:08:28.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.311 "is_configured": false, 00:08:28.311 "data_offset": 0, 00:08:28.311 "data_size": 0 00:08:28.311 } 00:08:28.311 ] 00:08:28.311 }' 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.311 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.882 [2024-09-28 08:46:06.655370] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:28.882 [2024-09-28 08:46:06.655416] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.882 [2024-09-28 08:46:06.667379] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:28.882 [2024-09-28 08:46:06.667425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:28.882 [2024-09-28 08:46:06.667433] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.882 [2024-09-28 08:46:06.667443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.882 [2024-09-28 08:46:06.667449] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:28.882 [2024-09-28 08:46:06.667458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.882 [2024-09-28 08:46:06.750168] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.882 BaseBdev1 
00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.882 [ 00:08:28.882 { 00:08:28.882 "name": "BaseBdev1", 00:08:28.882 "aliases": [ 00:08:28.882 "f0cba19b-71fb-4a3c-b598-4d357e42c6ac" 00:08:28.882 ], 00:08:28.882 "product_name": "Malloc disk", 00:08:28.882 "block_size": 512, 00:08:28.882 "num_blocks": 65536, 00:08:28.882 "uuid": "f0cba19b-71fb-4a3c-b598-4d357e42c6ac", 00:08:28.882 "assigned_rate_limits": { 00:08:28.882 
"rw_ios_per_sec": 0, 00:08:28.882 "rw_mbytes_per_sec": 0, 00:08:28.882 "r_mbytes_per_sec": 0, 00:08:28.882 "w_mbytes_per_sec": 0 00:08:28.882 }, 00:08:28.882 "claimed": true, 00:08:28.882 "claim_type": "exclusive_write", 00:08:28.882 "zoned": false, 00:08:28.882 "supported_io_types": { 00:08:28.882 "read": true, 00:08:28.882 "write": true, 00:08:28.882 "unmap": true, 00:08:28.882 "flush": true, 00:08:28.882 "reset": true, 00:08:28.882 "nvme_admin": false, 00:08:28.882 "nvme_io": false, 00:08:28.882 "nvme_io_md": false, 00:08:28.882 "write_zeroes": true, 00:08:28.882 "zcopy": true, 00:08:28.882 "get_zone_info": false, 00:08:28.882 "zone_management": false, 00:08:28.882 "zone_append": false, 00:08:28.882 "compare": false, 00:08:28.882 "compare_and_write": false, 00:08:28.882 "abort": true, 00:08:28.882 "seek_hole": false, 00:08:28.882 "seek_data": false, 00:08:28.882 "copy": true, 00:08:28.882 "nvme_iov_md": false 00:08:28.882 }, 00:08:28.882 "memory_domains": [ 00:08:28.882 { 00:08:28.882 "dma_device_id": "system", 00:08:28.882 "dma_device_type": 1 00:08:28.882 }, 00:08:28.882 { 00:08:28.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.882 "dma_device_type": 2 00:08:28.882 } 00:08:28.882 ], 00:08:28.882 "driver_specific": {} 00:08:28.882 } 00:08:28.882 ] 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.882 "name": "Existed_Raid", 00:08:28.882 "uuid": "46cd3a81-fa18-44e5-8a45-ab9d059a8efe", 00:08:28.882 "strip_size_kb": 64, 00:08:28.882 "state": "configuring", 00:08:28.882 "raid_level": "raid0", 00:08:28.882 "superblock": true, 00:08:28.882 "num_base_bdevs": 3, 00:08:28.882 "num_base_bdevs_discovered": 1, 00:08:28.882 "num_base_bdevs_operational": 3, 00:08:28.882 "base_bdevs_list": [ 00:08:28.882 { 00:08:28.882 "name": "BaseBdev1", 00:08:28.882 "uuid": "f0cba19b-71fb-4a3c-b598-4d357e42c6ac", 00:08:28.882 "is_configured": true, 00:08:28.882 "data_offset": 2048, 00:08:28.882 "data_size": 63488 
00:08:28.882 }, 00:08:28.882 { 00:08:28.882 "name": "BaseBdev2", 00:08:28.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.882 "is_configured": false, 00:08:28.882 "data_offset": 0, 00:08:28.882 "data_size": 0 00:08:28.882 }, 00:08:28.882 { 00:08:28.882 "name": "BaseBdev3", 00:08:28.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.882 "is_configured": false, 00:08:28.882 "data_offset": 0, 00:08:28.882 "data_size": 0 00:08:28.882 } 00:08:28.882 ] 00:08:28.882 }' 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.882 08:46:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.453 [2024-09-28 08:46:07.201423] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:29.453 [2024-09-28 08:46:07.201480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.453 [2024-09-28 08:46:07.209460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:29.453 [2024-09-28 
08:46:07.211594] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.453 [2024-09-28 08:46:07.211639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.453 [2024-09-28 08:46:07.211658] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:29.453 [2024-09-28 08:46:07.211667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.453 "name": "Existed_Raid", 00:08:29.453 "uuid": "e30d6f1d-aa0d-4019-94cf-d0d774c48356", 00:08:29.453 "strip_size_kb": 64, 00:08:29.453 "state": "configuring", 00:08:29.453 "raid_level": "raid0", 00:08:29.453 "superblock": true, 00:08:29.453 "num_base_bdevs": 3, 00:08:29.453 "num_base_bdevs_discovered": 1, 00:08:29.453 "num_base_bdevs_operational": 3, 00:08:29.453 "base_bdevs_list": [ 00:08:29.453 { 00:08:29.453 "name": "BaseBdev1", 00:08:29.453 "uuid": "f0cba19b-71fb-4a3c-b598-4d357e42c6ac", 00:08:29.453 "is_configured": true, 00:08:29.453 "data_offset": 2048, 00:08:29.453 "data_size": 63488 00:08:29.453 }, 00:08:29.453 { 00:08:29.453 "name": "BaseBdev2", 00:08:29.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.453 "is_configured": false, 00:08:29.453 "data_offset": 0, 00:08:29.453 "data_size": 0 00:08:29.453 }, 00:08:29.453 { 00:08:29.453 "name": "BaseBdev3", 00:08:29.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.453 "is_configured": false, 00:08:29.453 "data_offset": 0, 00:08:29.453 "data_size": 0 00:08:29.453 } 00:08:29.453 ] 00:08:29.453 }' 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.453 08:46:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.713 [2024-09-28 08:46:07.643891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.713 BaseBdev2 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.713 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.713 [ 00:08:29.713 { 00:08:29.713 "name": "BaseBdev2", 00:08:29.713 "aliases": [ 00:08:29.713 "14a294bf-55e1-4df3-b3df-d33a7919570a" 00:08:29.713 ], 00:08:29.713 "product_name": "Malloc disk", 00:08:29.713 "block_size": 512, 00:08:29.713 "num_blocks": 65536, 00:08:29.713 "uuid": "14a294bf-55e1-4df3-b3df-d33a7919570a", 00:08:29.713 "assigned_rate_limits": { 00:08:29.713 "rw_ios_per_sec": 0, 00:08:29.713 "rw_mbytes_per_sec": 0, 00:08:29.713 "r_mbytes_per_sec": 0, 00:08:29.713 "w_mbytes_per_sec": 0 00:08:29.713 }, 00:08:29.713 "claimed": true, 00:08:29.713 "claim_type": "exclusive_write", 00:08:29.713 "zoned": false, 00:08:29.713 "supported_io_types": { 00:08:29.713 "read": true, 00:08:29.713 "write": true, 00:08:29.713 "unmap": true, 00:08:29.713 "flush": true, 00:08:29.713 "reset": true, 00:08:29.713 "nvme_admin": false, 00:08:29.713 "nvme_io": false, 00:08:29.713 "nvme_io_md": false, 00:08:29.713 "write_zeroes": true, 00:08:29.713 "zcopy": true, 00:08:29.713 "get_zone_info": false, 00:08:29.713 "zone_management": false, 00:08:29.713 "zone_append": false, 00:08:29.713 "compare": false, 00:08:29.713 "compare_and_write": false, 00:08:29.713 "abort": true, 00:08:29.713 "seek_hole": false, 00:08:29.713 "seek_data": false, 00:08:29.713 "copy": true, 00:08:29.713 "nvme_iov_md": false 00:08:29.713 }, 00:08:29.713 "memory_domains": [ 00:08:29.713 { 00:08:29.713 "dma_device_id": "system", 00:08:29.713 "dma_device_type": 1 00:08:29.713 }, 00:08:29.713 { 00:08:29.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.714 "dma_device_type": 2 00:08:29.714 } 00:08:29.714 ], 00:08:29.714 "driver_specific": {} 00:08:29.714 } 00:08:29.714 ] 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.714 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.973 08:46:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.973 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.973 "name": "Existed_Raid", 00:08:29.973 "uuid": "e30d6f1d-aa0d-4019-94cf-d0d774c48356", 00:08:29.973 "strip_size_kb": 64, 00:08:29.973 "state": "configuring", 00:08:29.973 "raid_level": "raid0", 00:08:29.973 "superblock": true, 00:08:29.973 "num_base_bdevs": 3, 00:08:29.973 "num_base_bdevs_discovered": 2, 00:08:29.973 "num_base_bdevs_operational": 3, 00:08:29.973 "base_bdevs_list": [ 00:08:29.973 { 00:08:29.973 "name": "BaseBdev1", 00:08:29.973 "uuid": "f0cba19b-71fb-4a3c-b598-4d357e42c6ac", 00:08:29.973 "is_configured": true, 00:08:29.973 "data_offset": 2048, 00:08:29.973 "data_size": 63488 00:08:29.973 }, 00:08:29.973 { 00:08:29.973 "name": "BaseBdev2", 00:08:29.973 "uuid": "14a294bf-55e1-4df3-b3df-d33a7919570a", 00:08:29.973 "is_configured": true, 00:08:29.973 "data_offset": 2048, 00:08:29.973 "data_size": 63488 00:08:29.973 }, 00:08:29.973 { 00:08:29.973 "name": "BaseBdev3", 00:08:29.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.973 "is_configured": false, 00:08:29.973 "data_offset": 0, 00:08:29.973 "data_size": 0 00:08:29.973 } 00:08:29.973 ] 00:08:29.973 }' 00:08:29.973 08:46:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.973 08:46:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.233 [2024-09-28 08:46:08.146828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:30.233 [2024-09-28 08:46:08.147094] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:30.233 [2024-09-28 08:46:08.147122] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:30.233 [2024-09-28 08:46:08.147433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:30.233 BaseBdev3 00:08:30.233 [2024-09-28 08:46:08.147788] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:30.233 [2024-09-28 08:46:08.147804] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:30.233 [2024-09-28 08:46:08.147974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.233 [ 00:08:30.233 { 00:08:30.233 "name": "BaseBdev3", 00:08:30.233 "aliases": [ 00:08:30.233 "a64419b8-a962-485d-aa74-e241db5a3b92" 00:08:30.233 ], 00:08:30.233 "product_name": "Malloc disk", 00:08:30.233 "block_size": 512, 00:08:30.233 "num_blocks": 65536, 00:08:30.233 "uuid": "a64419b8-a962-485d-aa74-e241db5a3b92", 00:08:30.233 "assigned_rate_limits": { 00:08:30.233 "rw_ios_per_sec": 0, 00:08:30.233 "rw_mbytes_per_sec": 0, 00:08:30.233 "r_mbytes_per_sec": 0, 00:08:30.233 "w_mbytes_per_sec": 0 00:08:30.233 }, 00:08:30.233 "claimed": true, 00:08:30.233 "claim_type": "exclusive_write", 00:08:30.233 "zoned": false, 00:08:30.233 "supported_io_types": { 00:08:30.233 "read": true, 00:08:30.233 "write": true, 00:08:30.233 "unmap": true, 00:08:30.233 "flush": true, 00:08:30.233 "reset": true, 00:08:30.233 "nvme_admin": false, 00:08:30.233 "nvme_io": false, 00:08:30.233 "nvme_io_md": false, 00:08:30.233 "write_zeroes": true, 00:08:30.233 "zcopy": true, 00:08:30.233 "get_zone_info": false, 00:08:30.233 "zone_management": false, 00:08:30.233 "zone_append": false, 00:08:30.233 "compare": false, 00:08:30.233 "compare_and_write": false, 00:08:30.233 "abort": true, 00:08:30.233 "seek_hole": false, 00:08:30.233 "seek_data": false, 00:08:30.233 "copy": true, 00:08:30.233 "nvme_iov_md": false 00:08:30.233 }, 00:08:30.233 "memory_domains": [ 00:08:30.233 { 00:08:30.233 "dma_device_id": "system", 00:08:30.233 "dma_device_type": 1 00:08:30.233 }, 00:08:30.233 { 00:08:30.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.233 "dma_device_type": 2 00:08:30.233 } 00:08:30.233 ], 00:08:30.233 "driver_specific": 
{} 00:08:30.233 } 00:08:30.233 ] 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.233 
08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.233 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.492 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.492 "name": "Existed_Raid", 00:08:30.492 "uuid": "e30d6f1d-aa0d-4019-94cf-d0d774c48356", 00:08:30.492 "strip_size_kb": 64, 00:08:30.492 "state": "online", 00:08:30.492 "raid_level": "raid0", 00:08:30.492 "superblock": true, 00:08:30.492 "num_base_bdevs": 3, 00:08:30.492 "num_base_bdevs_discovered": 3, 00:08:30.492 "num_base_bdevs_operational": 3, 00:08:30.492 "base_bdevs_list": [ 00:08:30.492 { 00:08:30.492 "name": "BaseBdev1", 00:08:30.492 "uuid": "f0cba19b-71fb-4a3c-b598-4d357e42c6ac", 00:08:30.492 "is_configured": true, 00:08:30.492 "data_offset": 2048, 00:08:30.492 "data_size": 63488 00:08:30.492 }, 00:08:30.492 { 00:08:30.492 "name": "BaseBdev2", 00:08:30.492 "uuid": "14a294bf-55e1-4df3-b3df-d33a7919570a", 00:08:30.492 "is_configured": true, 00:08:30.492 "data_offset": 2048, 00:08:30.492 "data_size": 63488 00:08:30.492 }, 00:08:30.492 { 00:08:30.492 "name": "BaseBdev3", 00:08:30.492 "uuid": "a64419b8-a962-485d-aa74-e241db5a3b92", 00:08:30.492 "is_configured": true, 00:08:30.492 "data_offset": 2048, 00:08:30.492 "data_size": 63488 00:08:30.493 } 00:08:30.493 ] 00:08:30.493 }' 00:08:30.493 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.493 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.751 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.752 [2024-09-28 08:46:08.610426] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:30.752 "name": "Existed_Raid", 00:08:30.752 "aliases": [ 00:08:30.752 "e30d6f1d-aa0d-4019-94cf-d0d774c48356" 00:08:30.752 ], 00:08:30.752 "product_name": "Raid Volume", 00:08:30.752 "block_size": 512, 00:08:30.752 "num_blocks": 190464, 00:08:30.752 "uuid": "e30d6f1d-aa0d-4019-94cf-d0d774c48356", 00:08:30.752 "assigned_rate_limits": { 00:08:30.752 "rw_ios_per_sec": 0, 00:08:30.752 "rw_mbytes_per_sec": 0, 00:08:30.752 "r_mbytes_per_sec": 0, 00:08:30.752 "w_mbytes_per_sec": 0 00:08:30.752 }, 00:08:30.752 "claimed": false, 00:08:30.752 "zoned": false, 00:08:30.752 "supported_io_types": { 00:08:30.752 "read": true, 00:08:30.752 "write": true, 00:08:30.752 "unmap": true, 00:08:30.752 "flush": true, 00:08:30.752 "reset": true, 00:08:30.752 "nvme_admin": false, 00:08:30.752 "nvme_io": false, 00:08:30.752 "nvme_io_md": false, 00:08:30.752 
"write_zeroes": true, 00:08:30.752 "zcopy": false, 00:08:30.752 "get_zone_info": false, 00:08:30.752 "zone_management": false, 00:08:30.752 "zone_append": false, 00:08:30.752 "compare": false, 00:08:30.752 "compare_and_write": false, 00:08:30.752 "abort": false, 00:08:30.752 "seek_hole": false, 00:08:30.752 "seek_data": false, 00:08:30.752 "copy": false, 00:08:30.752 "nvme_iov_md": false 00:08:30.752 }, 00:08:30.752 "memory_domains": [ 00:08:30.752 { 00:08:30.752 "dma_device_id": "system", 00:08:30.752 "dma_device_type": 1 00:08:30.752 }, 00:08:30.752 { 00:08:30.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.752 "dma_device_type": 2 00:08:30.752 }, 00:08:30.752 { 00:08:30.752 "dma_device_id": "system", 00:08:30.752 "dma_device_type": 1 00:08:30.752 }, 00:08:30.752 { 00:08:30.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.752 "dma_device_type": 2 00:08:30.752 }, 00:08:30.752 { 00:08:30.752 "dma_device_id": "system", 00:08:30.752 "dma_device_type": 1 00:08:30.752 }, 00:08:30.752 { 00:08:30.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.752 "dma_device_type": 2 00:08:30.752 } 00:08:30.752 ], 00:08:30.752 "driver_specific": { 00:08:30.752 "raid": { 00:08:30.752 "uuid": "e30d6f1d-aa0d-4019-94cf-d0d774c48356", 00:08:30.752 "strip_size_kb": 64, 00:08:30.752 "state": "online", 00:08:30.752 "raid_level": "raid0", 00:08:30.752 "superblock": true, 00:08:30.752 "num_base_bdevs": 3, 00:08:30.752 "num_base_bdevs_discovered": 3, 00:08:30.752 "num_base_bdevs_operational": 3, 00:08:30.752 "base_bdevs_list": [ 00:08:30.752 { 00:08:30.752 "name": "BaseBdev1", 00:08:30.752 "uuid": "f0cba19b-71fb-4a3c-b598-4d357e42c6ac", 00:08:30.752 "is_configured": true, 00:08:30.752 "data_offset": 2048, 00:08:30.752 "data_size": 63488 00:08:30.752 }, 00:08:30.752 { 00:08:30.752 "name": "BaseBdev2", 00:08:30.752 "uuid": "14a294bf-55e1-4df3-b3df-d33a7919570a", 00:08:30.752 "is_configured": true, 00:08:30.752 "data_offset": 2048, 00:08:30.752 "data_size": 63488 00:08:30.752 }, 
00:08:30.752 { 00:08:30.752 "name": "BaseBdev3", 00:08:30.752 "uuid": "a64419b8-a962-485d-aa74-e241db5a3b92", 00:08:30.752 "is_configured": true, 00:08:30.752 "data_offset": 2048, 00:08:30.752 "data_size": 63488 00:08:30.752 } 00:08:30.752 ] 00:08:30.752 } 00:08:30.752 } 00:08:30.752 }' 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:30.752 BaseBdev2 00:08:30.752 BaseBdev3' 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.752 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.012 
08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.012 [2024-09-28 08:46:08.889628] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:31.012 [2024-09-28 08:46:08.889686] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.012 [2024-09-28 08:46:08.889754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.012 08:46:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.012 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.012 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.012 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.012 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.271 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.271 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.271 "name": "Existed_Raid", 00:08:31.271 "uuid": "e30d6f1d-aa0d-4019-94cf-d0d774c48356", 00:08:31.271 "strip_size_kb": 64, 00:08:31.271 "state": "offline", 00:08:31.271 "raid_level": "raid0", 00:08:31.271 "superblock": true, 00:08:31.271 "num_base_bdevs": 3, 00:08:31.271 "num_base_bdevs_discovered": 2, 00:08:31.271 "num_base_bdevs_operational": 2, 00:08:31.271 "base_bdevs_list": [ 00:08:31.271 { 00:08:31.271 "name": null, 00:08:31.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.271 "is_configured": false, 00:08:31.271 "data_offset": 0, 00:08:31.271 "data_size": 63488 00:08:31.271 }, 00:08:31.271 { 00:08:31.271 "name": "BaseBdev2", 00:08:31.271 "uuid": "14a294bf-55e1-4df3-b3df-d33a7919570a", 00:08:31.271 "is_configured": true, 00:08:31.271 "data_offset": 2048, 00:08:31.271 "data_size": 63488 00:08:31.271 }, 00:08:31.271 { 00:08:31.271 "name": "BaseBdev3", 00:08:31.271 "uuid": "a64419b8-a962-485d-aa74-e241db5a3b92", 
00:08:31.271 "is_configured": true, 00:08:31.271 "data_offset": 2048, 00:08:31.271 "data_size": 63488 00:08:31.271 } 00:08:31.271 ] 00:08:31.271 }' 00:08:31.271 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.271 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.531 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:31.531 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.532 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:31.532 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.532 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.532 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.532 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.532 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:31.532 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:31.532 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:31.532 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.532 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.532 [2024-09-28 08:46:09.458910] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:31.791 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.791 08:46:09 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:31.791 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.791 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.791 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:31.791 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.791 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.791 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.791 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:31.791 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:31.791 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:31.791 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.791 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.791 [2024-09-28 08:46:09.620643] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:31.791 [2024-09-28 08:46:09.620731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:31.791 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.792 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:31.792 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.792 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:31.792 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:31.792 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.792 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.792 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.792 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:31.792 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:31.792 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:31.792 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:31.792 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:31.792 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:31.792 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.792 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.051 BaseBdev2 00:08:32.051 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.051 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:32.051 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:32.051 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:32.051 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:32.051 08:46:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:32.051 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:32.051 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:32.051 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.051 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.051 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.051 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:32.051 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.051 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.051 [ 00:08:32.051 { 00:08:32.051 "name": "BaseBdev2", 00:08:32.051 "aliases": [ 00:08:32.051 "4bda3592-6ce3-4d9d-894a-47b6ec751484" 00:08:32.051 ], 00:08:32.051 "product_name": "Malloc disk", 00:08:32.051 "block_size": 512, 00:08:32.051 "num_blocks": 65536, 00:08:32.051 "uuid": "4bda3592-6ce3-4d9d-894a-47b6ec751484", 00:08:32.051 "assigned_rate_limits": { 00:08:32.051 "rw_ios_per_sec": 0, 00:08:32.051 "rw_mbytes_per_sec": 0, 00:08:32.051 "r_mbytes_per_sec": 0, 00:08:32.051 "w_mbytes_per_sec": 0 00:08:32.051 }, 00:08:32.051 "claimed": false, 00:08:32.051 "zoned": false, 00:08:32.051 "supported_io_types": { 00:08:32.051 "read": true, 00:08:32.051 "write": true, 00:08:32.051 "unmap": true, 00:08:32.051 "flush": true, 00:08:32.051 "reset": true, 00:08:32.051 "nvme_admin": false, 00:08:32.051 "nvme_io": false, 00:08:32.051 "nvme_io_md": false, 00:08:32.051 "write_zeroes": true, 00:08:32.051 "zcopy": true, 00:08:32.051 "get_zone_info": false, 00:08:32.051 
"zone_management": false, 00:08:32.052 "zone_append": false, 00:08:32.052 "compare": false, 00:08:32.052 "compare_and_write": false, 00:08:32.052 "abort": true, 00:08:32.052 "seek_hole": false, 00:08:32.052 "seek_data": false, 00:08:32.052 "copy": true, 00:08:32.052 "nvme_iov_md": false 00:08:32.052 }, 00:08:32.052 "memory_domains": [ 00:08:32.052 { 00:08:32.052 "dma_device_id": "system", 00:08:32.052 "dma_device_type": 1 00:08:32.052 }, 00:08:32.052 { 00:08:32.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.052 "dma_device_type": 2 00:08:32.052 } 00:08:32.052 ], 00:08:32.052 "driver_specific": {} 00:08:32.052 } 00:08:32.052 ] 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.052 BaseBdev3 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.052 [ 00:08:32.052 { 00:08:32.052 "name": "BaseBdev3", 00:08:32.052 "aliases": [ 00:08:32.052 "aeb89142-0fba-4cee-81c0-70d6582fd5df" 00:08:32.052 ], 00:08:32.052 "product_name": "Malloc disk", 00:08:32.052 "block_size": 512, 00:08:32.052 "num_blocks": 65536, 00:08:32.052 "uuid": "aeb89142-0fba-4cee-81c0-70d6582fd5df", 00:08:32.052 "assigned_rate_limits": { 00:08:32.052 "rw_ios_per_sec": 0, 00:08:32.052 "rw_mbytes_per_sec": 0, 00:08:32.052 "r_mbytes_per_sec": 0, 00:08:32.052 "w_mbytes_per_sec": 0 00:08:32.052 }, 00:08:32.052 "claimed": false, 00:08:32.052 "zoned": false, 00:08:32.052 "supported_io_types": { 00:08:32.052 "read": true, 00:08:32.052 "write": true, 00:08:32.052 "unmap": true, 00:08:32.052 "flush": true, 00:08:32.052 "reset": true, 00:08:32.052 "nvme_admin": false, 00:08:32.052 "nvme_io": false, 00:08:32.052 "nvme_io_md": false, 00:08:32.052 "write_zeroes": true, 00:08:32.052 
"zcopy": true, 00:08:32.052 "get_zone_info": false, 00:08:32.052 "zone_management": false, 00:08:32.052 "zone_append": false, 00:08:32.052 "compare": false, 00:08:32.052 "compare_and_write": false, 00:08:32.052 "abort": true, 00:08:32.052 "seek_hole": false, 00:08:32.052 "seek_data": false, 00:08:32.052 "copy": true, 00:08:32.052 "nvme_iov_md": false 00:08:32.052 }, 00:08:32.052 "memory_domains": [ 00:08:32.052 { 00:08:32.052 "dma_device_id": "system", 00:08:32.052 "dma_device_type": 1 00:08:32.052 }, 00:08:32.052 { 00:08:32.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.052 "dma_device_type": 2 00:08:32.052 } 00:08:32.052 ], 00:08:32.052 "driver_specific": {} 00:08:32.052 } 00:08:32.052 ] 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.052 [2024-09-28 08:46:09.944696] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.052 [2024-09-28 08:46:09.944741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.052 [2024-09-28 08:46:09.944762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.052 [2024-09-28 08:46:09.946822] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.052 08:46:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.052 "name": "Existed_Raid", 00:08:32.052 "uuid": "ac08a051-5e3a-4548-aaf1-9dd95f556061", 00:08:32.052 "strip_size_kb": 64, 00:08:32.052 "state": "configuring", 00:08:32.052 "raid_level": "raid0", 00:08:32.052 "superblock": true, 00:08:32.052 "num_base_bdevs": 3, 00:08:32.052 "num_base_bdevs_discovered": 2, 00:08:32.052 "num_base_bdevs_operational": 3, 00:08:32.052 "base_bdevs_list": [ 00:08:32.052 { 00:08:32.052 "name": "BaseBdev1", 00:08:32.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.052 "is_configured": false, 00:08:32.052 "data_offset": 0, 00:08:32.052 "data_size": 0 00:08:32.052 }, 00:08:32.052 { 00:08:32.052 "name": "BaseBdev2", 00:08:32.052 "uuid": "4bda3592-6ce3-4d9d-894a-47b6ec751484", 00:08:32.052 "is_configured": true, 00:08:32.052 "data_offset": 2048, 00:08:32.052 "data_size": 63488 00:08:32.052 }, 00:08:32.052 { 00:08:32.052 "name": "BaseBdev3", 00:08:32.052 "uuid": "aeb89142-0fba-4cee-81c0-70d6582fd5df", 00:08:32.052 "is_configured": true, 00:08:32.052 "data_offset": 2048, 00:08:32.052 "data_size": 63488 00:08:32.052 } 00:08:32.052 ] 00:08:32.052 }' 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.052 08:46:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.622 [2024-09-28 08:46:10.431812] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.622 08:46:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.622 "name": "Existed_Raid", 00:08:32.622 "uuid": "ac08a051-5e3a-4548-aaf1-9dd95f556061", 00:08:32.622 "strip_size_kb": 64, 
00:08:32.622 "state": "configuring", 00:08:32.622 "raid_level": "raid0", 00:08:32.622 "superblock": true, 00:08:32.622 "num_base_bdevs": 3, 00:08:32.622 "num_base_bdevs_discovered": 1, 00:08:32.622 "num_base_bdevs_operational": 3, 00:08:32.622 "base_bdevs_list": [ 00:08:32.622 { 00:08:32.622 "name": "BaseBdev1", 00:08:32.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.622 "is_configured": false, 00:08:32.622 "data_offset": 0, 00:08:32.622 "data_size": 0 00:08:32.622 }, 00:08:32.622 { 00:08:32.622 "name": null, 00:08:32.622 "uuid": "4bda3592-6ce3-4d9d-894a-47b6ec751484", 00:08:32.622 "is_configured": false, 00:08:32.622 "data_offset": 0, 00:08:32.622 "data_size": 63488 00:08:32.622 }, 00:08:32.622 { 00:08:32.622 "name": "BaseBdev3", 00:08:32.622 "uuid": "aeb89142-0fba-4cee-81c0-70d6582fd5df", 00:08:32.622 "is_configured": true, 00:08:32.622 "data_offset": 2048, 00:08:32.622 "data_size": 63488 00:08:32.622 } 00:08:32.622 ] 00:08:32.622 }' 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.622 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.191 [2024-09-28 08:46:10.969407] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.191 BaseBdev1 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.191 08:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.191 
[ 00:08:33.191 { 00:08:33.191 "name": "BaseBdev1", 00:08:33.191 "aliases": [ 00:08:33.191 "9e61b322-495c-4658-949b-64b54b31f2c4" 00:08:33.191 ], 00:08:33.191 "product_name": "Malloc disk", 00:08:33.191 "block_size": 512, 00:08:33.191 "num_blocks": 65536, 00:08:33.191 "uuid": "9e61b322-495c-4658-949b-64b54b31f2c4", 00:08:33.191 "assigned_rate_limits": { 00:08:33.191 "rw_ios_per_sec": 0, 00:08:33.191 "rw_mbytes_per_sec": 0, 00:08:33.191 "r_mbytes_per_sec": 0, 00:08:33.191 "w_mbytes_per_sec": 0 00:08:33.191 }, 00:08:33.191 "claimed": true, 00:08:33.191 "claim_type": "exclusive_write", 00:08:33.191 "zoned": false, 00:08:33.191 "supported_io_types": { 00:08:33.191 "read": true, 00:08:33.191 "write": true, 00:08:33.191 "unmap": true, 00:08:33.191 "flush": true, 00:08:33.191 "reset": true, 00:08:33.191 "nvme_admin": false, 00:08:33.191 "nvme_io": false, 00:08:33.191 "nvme_io_md": false, 00:08:33.191 "write_zeroes": true, 00:08:33.191 "zcopy": true, 00:08:33.191 "get_zone_info": false, 00:08:33.191 "zone_management": false, 00:08:33.191 "zone_append": false, 00:08:33.191 "compare": false, 00:08:33.191 "compare_and_write": false, 00:08:33.191 "abort": true, 00:08:33.191 "seek_hole": false, 00:08:33.191 "seek_data": false, 00:08:33.191 "copy": true, 00:08:33.191 "nvme_iov_md": false 00:08:33.191 }, 00:08:33.191 "memory_domains": [ 00:08:33.191 { 00:08:33.191 "dma_device_id": "system", 00:08:33.191 "dma_device_type": 1 00:08:33.191 }, 00:08:33.191 { 00:08:33.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.191 "dma_device_type": 2 00:08:33.191 } 00:08:33.191 ], 00:08:33.192 "driver_specific": {} 00:08:33.192 } 00:08:33.192 ] 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.192 "name": "Existed_Raid", 00:08:33.192 "uuid": "ac08a051-5e3a-4548-aaf1-9dd95f556061", 00:08:33.192 "strip_size_kb": 64, 00:08:33.192 "state": "configuring", 00:08:33.192 "raid_level": "raid0", 00:08:33.192 "superblock": true, 
00:08:33.192 "num_base_bdevs": 3, 00:08:33.192 "num_base_bdevs_discovered": 2, 00:08:33.192 "num_base_bdevs_operational": 3, 00:08:33.192 "base_bdevs_list": [ 00:08:33.192 { 00:08:33.192 "name": "BaseBdev1", 00:08:33.192 "uuid": "9e61b322-495c-4658-949b-64b54b31f2c4", 00:08:33.192 "is_configured": true, 00:08:33.192 "data_offset": 2048, 00:08:33.192 "data_size": 63488 00:08:33.192 }, 00:08:33.192 { 00:08:33.192 "name": null, 00:08:33.192 "uuid": "4bda3592-6ce3-4d9d-894a-47b6ec751484", 00:08:33.192 "is_configured": false, 00:08:33.192 "data_offset": 0, 00:08:33.192 "data_size": 63488 00:08:33.192 }, 00:08:33.192 { 00:08:33.192 "name": "BaseBdev3", 00:08:33.192 "uuid": "aeb89142-0fba-4cee-81c0-70d6582fd5df", 00:08:33.192 "is_configured": true, 00:08:33.192 "data_offset": 2048, 00:08:33.192 "data_size": 63488 00:08:33.192 } 00:08:33.192 ] 00:08:33.192 }' 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.192 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.451 [2024-09-28 08:46:11.436650] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.451 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.709 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.709 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.709 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.709 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:33.709 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.709 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.709 "name": "Existed_Raid", 00:08:33.709 "uuid": "ac08a051-5e3a-4548-aaf1-9dd95f556061", 00:08:33.709 "strip_size_kb": 64, 00:08:33.709 "state": "configuring", 00:08:33.709 "raid_level": "raid0", 00:08:33.709 "superblock": true, 00:08:33.709 "num_base_bdevs": 3, 00:08:33.709 "num_base_bdevs_discovered": 1, 00:08:33.709 "num_base_bdevs_operational": 3, 00:08:33.709 "base_bdevs_list": [ 00:08:33.709 { 00:08:33.709 "name": "BaseBdev1", 00:08:33.709 "uuid": "9e61b322-495c-4658-949b-64b54b31f2c4", 00:08:33.709 "is_configured": true, 00:08:33.709 "data_offset": 2048, 00:08:33.709 "data_size": 63488 00:08:33.709 }, 00:08:33.709 { 00:08:33.709 "name": null, 00:08:33.709 "uuid": "4bda3592-6ce3-4d9d-894a-47b6ec751484", 00:08:33.709 "is_configured": false, 00:08:33.709 "data_offset": 0, 00:08:33.709 "data_size": 63488 00:08:33.709 }, 00:08:33.709 { 00:08:33.709 "name": null, 00:08:33.709 "uuid": "aeb89142-0fba-4cee-81c0-70d6582fd5df", 00:08:33.709 "is_configured": false, 00:08:33.709 "data_offset": 0, 00:08:33.709 "data_size": 63488 00:08:33.709 } 00:08:33.709 ] 00:08:33.709 }' 00:08:33.709 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.709 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
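The repeated `verify_raid_bdev_state Existed_Raid configuring raid0 64 3` calls traced above all follow one pattern: dump the raid bdev via `rpc_cmd bdev_raid_get_bdevs all`, select it with jq, then compare `state`, `raid_level`, `strip_size_kb`, and `num_base_bdevs` against the expected values. The following is a minimal, dependency-free sketch of that check; the JSON string is a sample copied from this log (a live run would fetch it through the RPC as shown in the trace), and the `field` helper is a stand-in for the jq selection, not part of the real `bdev_raid.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the verify_raid_bdev_state pattern from bdev_raid.sh.
# The JSON below is a sample copied from the log; a real run would do:
#   rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
raid_bdev_info='{"name":"Existed_Raid","state":"configuring","raid_level":"raid0","strip_size_kb":64,"num_base_bdevs":3}'

# Tiny field extractor (hypothetical helper; avoids a jq dependency here).
field() {
  local json=$1 key=$2 re
  re="\"$key\":[[:space:]]*\"?([^\",}]+)"
  [[ $json =~ $re ]] && printf '%s\n' "${BASH_REMATCH[1]}"
}

verify_raid_bdev_state() {
  local expected_state=$1 raid_level=$2 strip_size=$3 num_base_bdevs=$4
  [[ $(field "$raid_bdev_info" state)          == "$expected_state" ]] || return 1
  [[ $(field "$raid_bdev_info" raid_level)     == "$raid_level"     ]] || return 1
  [[ $(field "$raid_bdev_info" strip_size_kb)  == "$strip_size"     ]] || return 1
  [[ $(field "$raid_bdev_info" num_base_bdevs) == "$num_base_bdevs" ]] || return 1
}

verify_raid_bdev_state configuring raid0 64 3 && echo OK
```

In the actual test the arrays `base_bdevs_list`, `num_base_bdevs_discovered`, etc. are also inspected per step (as the trace shows with `.base_bdevs_list[N].is_configured`); the sketch keeps only the top-level comparison.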
00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.969 [2024-09-28 08:46:11.911870] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.969 08:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.228 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.228 "name": "Existed_Raid", 00:08:34.228 "uuid": "ac08a051-5e3a-4548-aaf1-9dd95f556061", 00:08:34.228 "strip_size_kb": 64, 00:08:34.228 "state": "configuring", 00:08:34.228 "raid_level": "raid0", 00:08:34.228 "superblock": true, 00:08:34.228 "num_base_bdevs": 3, 00:08:34.228 "num_base_bdevs_discovered": 2, 00:08:34.228 "num_base_bdevs_operational": 3, 00:08:34.228 "base_bdevs_list": [ 00:08:34.228 { 00:08:34.228 "name": "BaseBdev1", 00:08:34.228 "uuid": "9e61b322-495c-4658-949b-64b54b31f2c4", 00:08:34.228 "is_configured": true, 00:08:34.228 "data_offset": 2048, 00:08:34.228 "data_size": 63488 00:08:34.228 }, 00:08:34.228 { 00:08:34.228 "name": null, 00:08:34.228 "uuid": "4bda3592-6ce3-4d9d-894a-47b6ec751484", 00:08:34.228 "is_configured": false, 00:08:34.228 "data_offset": 0, 00:08:34.228 "data_size": 63488 00:08:34.228 }, 00:08:34.228 { 00:08:34.228 "name": "BaseBdev3", 00:08:34.228 "uuid": "aeb89142-0fba-4cee-81c0-70d6582fd5df", 00:08:34.228 "is_configured": true, 00:08:34.228 "data_offset": 2048, 00:08:34.228 "data_size": 63488 00:08:34.228 } 00:08:34.228 ] 00:08:34.228 }' 00:08:34.228 08:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.228 08:46:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.488 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:34.488 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.488 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.488 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.488 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.488 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:34.488 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:34.488 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.488 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.488 [2024-09-28 08:46:12.403148] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.747 "name": "Existed_Raid", 00:08:34.747 "uuid": "ac08a051-5e3a-4548-aaf1-9dd95f556061", 00:08:34.747 "strip_size_kb": 64, 00:08:34.747 "state": "configuring", 00:08:34.747 "raid_level": "raid0", 00:08:34.747 "superblock": true, 00:08:34.747 "num_base_bdevs": 3, 00:08:34.747 "num_base_bdevs_discovered": 1, 00:08:34.747 "num_base_bdevs_operational": 3, 00:08:34.747 "base_bdevs_list": [ 00:08:34.747 { 00:08:34.747 "name": null, 00:08:34.747 "uuid": "9e61b322-495c-4658-949b-64b54b31f2c4", 00:08:34.747 "is_configured": false, 00:08:34.747 "data_offset": 0, 00:08:34.747 "data_size": 63488 00:08:34.747 }, 00:08:34.747 { 00:08:34.747 "name": null, 00:08:34.747 "uuid": "4bda3592-6ce3-4d9d-894a-47b6ec751484", 00:08:34.747 "is_configured": false, 00:08:34.747 "data_offset": 0, 00:08:34.747 
"data_size": 63488 00:08:34.747 }, 00:08:34.747 { 00:08:34.747 "name": "BaseBdev3", 00:08:34.747 "uuid": "aeb89142-0fba-4cee-81c0-70d6582fd5df", 00:08:34.747 "is_configured": true, 00:08:34.747 "data_offset": 2048, 00:08:34.747 "data_size": 63488 00:08:34.747 } 00:08:34.747 ] 00:08:34.747 }' 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.747 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.006 [2024-09-28 08:46:12.984237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.006 08:46:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.006 08:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.266 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.266 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.266 "name": "Existed_Raid", 00:08:35.266 "uuid": "ac08a051-5e3a-4548-aaf1-9dd95f556061", 00:08:35.266 "strip_size_kb": 64, 00:08:35.266 "state": "configuring", 00:08:35.266 "raid_level": "raid0", 00:08:35.266 "superblock": true, 00:08:35.266 "num_base_bdevs": 3, 00:08:35.266 
"num_base_bdevs_discovered": 2, 00:08:35.266 "num_base_bdevs_operational": 3, 00:08:35.266 "base_bdevs_list": [ 00:08:35.266 { 00:08:35.266 "name": null, 00:08:35.266 "uuid": "9e61b322-495c-4658-949b-64b54b31f2c4", 00:08:35.266 "is_configured": false, 00:08:35.266 "data_offset": 0, 00:08:35.266 "data_size": 63488 00:08:35.266 }, 00:08:35.266 { 00:08:35.266 "name": "BaseBdev2", 00:08:35.266 "uuid": "4bda3592-6ce3-4d9d-894a-47b6ec751484", 00:08:35.266 "is_configured": true, 00:08:35.266 "data_offset": 2048, 00:08:35.266 "data_size": 63488 00:08:35.266 }, 00:08:35.266 { 00:08:35.266 "name": "BaseBdev3", 00:08:35.266 "uuid": "aeb89142-0fba-4cee-81c0-70d6582fd5df", 00:08:35.266 "is_configured": true, 00:08:35.266 "data_offset": 2048, 00:08:35.266 "data_size": 63488 00:08:35.266 } 00:08:35.266 ] 00:08:35.266 }' 00:08:35.266 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.266 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.526 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.526 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.526 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.526 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:35.526 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.526 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:35.526 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.526 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:35.526 08:46:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.526 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.526 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.526 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9e61b322-495c-4658-949b-64b54b31f2c4 00:08:35.526 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.526 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.786 [2024-09-28 08:46:13.557595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:35.786 [2024-09-28 08:46:13.557865] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:35.786 [2024-09-28 08:46:13.557884] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:35.786 [2024-09-28 08:46:13.558183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:35.786 [2024-09-28 08:46:13.558351] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:35.786 [2024-09-28 08:46:13.558366] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:35.786 NewBaseBdev 00:08:35.786 [2024-09-28 08:46:13.558503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.786 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.786 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:35.786 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:35.786 
08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.786 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:35.786 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.786 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.786 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.786 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.786 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.786 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.786 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:35.786 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.786 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.786 [ 00:08:35.786 { 00:08:35.786 "name": "NewBaseBdev", 00:08:35.786 "aliases": [ 00:08:35.786 "9e61b322-495c-4658-949b-64b54b31f2c4" 00:08:35.786 ], 00:08:35.786 "product_name": "Malloc disk", 00:08:35.786 "block_size": 512, 00:08:35.786 "num_blocks": 65536, 00:08:35.786 "uuid": "9e61b322-495c-4658-949b-64b54b31f2c4", 00:08:35.786 "assigned_rate_limits": { 00:08:35.786 "rw_ios_per_sec": 0, 00:08:35.786 "rw_mbytes_per_sec": 0, 00:08:35.786 "r_mbytes_per_sec": 0, 00:08:35.786 "w_mbytes_per_sec": 0 00:08:35.786 }, 00:08:35.786 "claimed": true, 00:08:35.786 "claim_type": "exclusive_write", 00:08:35.786 "zoned": false, 00:08:35.786 "supported_io_types": { 00:08:35.786 "read": true, 00:08:35.786 "write": true, 00:08:35.786 
"unmap": true, 00:08:35.786 "flush": true, 00:08:35.786 "reset": true, 00:08:35.786 "nvme_admin": false, 00:08:35.786 "nvme_io": false, 00:08:35.786 "nvme_io_md": false, 00:08:35.786 "write_zeroes": true, 00:08:35.786 "zcopy": true, 00:08:35.786 "get_zone_info": false, 00:08:35.786 "zone_management": false, 00:08:35.786 "zone_append": false, 00:08:35.786 "compare": false, 00:08:35.786 "compare_and_write": false, 00:08:35.786 "abort": true, 00:08:35.786 "seek_hole": false, 00:08:35.786 "seek_data": false, 00:08:35.786 "copy": true, 00:08:35.786 "nvme_iov_md": false 00:08:35.786 }, 00:08:35.786 "memory_domains": [ 00:08:35.786 { 00:08:35.786 "dma_device_id": "system", 00:08:35.786 "dma_device_type": 1 00:08:35.786 }, 00:08:35.786 { 00:08:35.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.786 "dma_device_type": 2 00:08:35.786 } 00:08:35.786 ], 00:08:35.786 "driver_specific": {} 00:08:35.786 } 00:08:35.786 ] 00:08:35.786 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.786 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.787 "name": "Existed_Raid", 00:08:35.787 "uuid": "ac08a051-5e3a-4548-aaf1-9dd95f556061", 00:08:35.787 "strip_size_kb": 64, 00:08:35.787 "state": "online", 00:08:35.787 "raid_level": "raid0", 00:08:35.787 "superblock": true, 00:08:35.787 "num_base_bdevs": 3, 00:08:35.787 "num_base_bdevs_discovered": 3, 00:08:35.787 "num_base_bdevs_operational": 3, 00:08:35.787 "base_bdevs_list": [ 00:08:35.787 { 00:08:35.787 "name": "NewBaseBdev", 00:08:35.787 "uuid": "9e61b322-495c-4658-949b-64b54b31f2c4", 00:08:35.787 "is_configured": true, 00:08:35.787 "data_offset": 2048, 00:08:35.787 "data_size": 63488 00:08:35.787 }, 00:08:35.787 { 00:08:35.787 "name": "BaseBdev2", 00:08:35.787 "uuid": "4bda3592-6ce3-4d9d-894a-47b6ec751484", 00:08:35.787 "is_configured": true, 00:08:35.787 "data_offset": 2048, 00:08:35.787 "data_size": 63488 00:08:35.787 }, 00:08:35.787 { 00:08:35.787 "name": "BaseBdev3", 00:08:35.787 "uuid": "aeb89142-0fba-4cee-81c0-70d6582fd5df", 00:08:35.787 
"is_configured": true, 00:08:35.787 "data_offset": 2048, 00:08:35.787 "data_size": 63488 00:08:35.787 } 00:08:35.787 ] 00:08:35.787 }' 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.787 08:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:36.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:36.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:36.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:36.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:36.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:36.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:36.046 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:36.046 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.046 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.046 [2024-09-28 08:46:14.029109] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.306 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.306 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:36.306 "name": "Existed_Raid", 00:08:36.306 "aliases": [ 00:08:36.306 "ac08a051-5e3a-4548-aaf1-9dd95f556061" 00:08:36.306 ], 00:08:36.306 "product_name": "Raid 
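The `verify_raid_bdev_properties` helper traced above projects `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` for the raid volume and for each configured base bdev, then requires the joined strings to be identical (jq renders the `null` metadata fields as empty strings, hence the `'512   '` comparisons in the trace). A self-contained sketch of that loop; `props_of` is a hypothetical stand-in for the `rpc_cmd bdev_get_bdevs -b <name> | jq` call, hard-coded with sample values from this run rather than queried live:

```shell
#!/usr/bin/env bash
# Sketch of the verify_raid_bdev_properties comparison from bdev_raid.sh.
# '512' followed by three empty metadata fields, as jq's join(" ")
# renders [512, null, false-as-null, null] in this run's output.
cmp_raid_bdev='512   '   # from bdev_get_bdevs -b Existed_Raid (sample)
base_bdev_names="NewBaseBdev BaseBdev2 BaseBdev3"

props_of() {
  # Stand-in for: rpc_cmd bdev_get_bdevs -b "$1" \
  #   | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
  case $1 in
    NewBaseBdev|BaseBdev2|BaseBdev3) printf '512   \n' ;;
    *) return 1 ;;
  esac
}

for name in $base_bdev_names; do
  cmp_base_bdev=$(props_of "$name") || { echo "no such bdev: $name"; exit 1; }
  [[ $cmp_raid_bdev == "$cmp_base_bdev" ]] || { echo "mismatch: $name"; exit 1; }
done
echo "properties match"
```

The point of the check, visible in the log as the `[[ 512 == \5\1\2\ \ \ ]]` comparisons, is that a raid0 volume must expose the same block size and DIF/metadata layout as every base bdev it stripes over.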
Volume", 00:08:36.306 "block_size": 512, 00:08:36.306 "num_blocks": 190464, 00:08:36.306 "uuid": "ac08a051-5e3a-4548-aaf1-9dd95f556061", 00:08:36.306 "assigned_rate_limits": { 00:08:36.306 "rw_ios_per_sec": 0, 00:08:36.306 "rw_mbytes_per_sec": 0, 00:08:36.306 "r_mbytes_per_sec": 0, 00:08:36.306 "w_mbytes_per_sec": 0 00:08:36.306 }, 00:08:36.306 "claimed": false, 00:08:36.306 "zoned": false, 00:08:36.306 "supported_io_types": { 00:08:36.306 "read": true, 00:08:36.306 "write": true, 00:08:36.306 "unmap": true, 00:08:36.306 "flush": true, 00:08:36.306 "reset": true, 00:08:36.306 "nvme_admin": false, 00:08:36.306 "nvme_io": false, 00:08:36.306 "nvme_io_md": false, 00:08:36.306 "write_zeroes": true, 00:08:36.306 "zcopy": false, 00:08:36.306 "get_zone_info": false, 00:08:36.306 "zone_management": false, 00:08:36.306 "zone_append": false, 00:08:36.306 "compare": false, 00:08:36.306 "compare_and_write": false, 00:08:36.306 "abort": false, 00:08:36.306 "seek_hole": false, 00:08:36.306 "seek_data": false, 00:08:36.306 "copy": false, 00:08:36.306 "nvme_iov_md": false 00:08:36.306 }, 00:08:36.306 "memory_domains": [ 00:08:36.306 { 00:08:36.306 "dma_device_id": "system", 00:08:36.306 "dma_device_type": 1 00:08:36.306 }, 00:08:36.306 { 00:08:36.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.306 "dma_device_type": 2 00:08:36.306 }, 00:08:36.306 { 00:08:36.306 "dma_device_id": "system", 00:08:36.306 "dma_device_type": 1 00:08:36.306 }, 00:08:36.306 { 00:08:36.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.306 "dma_device_type": 2 00:08:36.306 }, 00:08:36.306 { 00:08:36.306 "dma_device_id": "system", 00:08:36.306 "dma_device_type": 1 00:08:36.306 }, 00:08:36.306 { 00:08:36.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.306 "dma_device_type": 2 00:08:36.306 } 00:08:36.306 ], 00:08:36.306 "driver_specific": { 00:08:36.306 "raid": { 00:08:36.306 "uuid": "ac08a051-5e3a-4548-aaf1-9dd95f556061", 00:08:36.306 "strip_size_kb": 64, 00:08:36.306 "state": "online", 
00:08:36.306 "raid_level": "raid0", 00:08:36.306 "superblock": true, 00:08:36.306 "num_base_bdevs": 3, 00:08:36.306 "num_base_bdevs_discovered": 3, 00:08:36.306 "num_base_bdevs_operational": 3, 00:08:36.306 "base_bdevs_list": [ 00:08:36.306 { 00:08:36.306 "name": "NewBaseBdev", 00:08:36.306 "uuid": "9e61b322-495c-4658-949b-64b54b31f2c4", 00:08:36.306 "is_configured": true, 00:08:36.306 "data_offset": 2048, 00:08:36.306 "data_size": 63488 00:08:36.306 }, 00:08:36.306 { 00:08:36.306 "name": "BaseBdev2", 00:08:36.306 "uuid": "4bda3592-6ce3-4d9d-894a-47b6ec751484", 00:08:36.306 "is_configured": true, 00:08:36.306 "data_offset": 2048, 00:08:36.306 "data_size": 63488 00:08:36.306 }, 00:08:36.306 { 00:08:36.306 "name": "BaseBdev3", 00:08:36.306 "uuid": "aeb89142-0fba-4cee-81c0-70d6582fd5df", 00:08:36.306 "is_configured": true, 00:08:36.306 "data_offset": 2048, 00:08:36.306 "data_size": 63488 00:08:36.306 } 00:08:36.306 ] 00:08:36.306 } 00:08:36.306 } 00:08:36.306 }' 00:08:36.306 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:36.306 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:36.306 BaseBdev2 00:08:36.306 BaseBdev3' 00:08:36.306 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.306 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:36.306 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.306 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:36.306 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.306 08:46:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:36.307 08:46:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.307 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.567 [2024-09-28 08:46:14.304316] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:36.567 [2024-09-28 08:46:14.304344] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.567 [2024-09-28 08:46:14.304425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.567 [2024-09-28 08:46:14.304484] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.567 [2024-09-28 08:46:14.304498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:36.567 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.567 08:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64442 00:08:36.567 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 64442 ']' 00:08:36.567 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 
64442 00:08:36.567 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:36.567 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.567 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64442 00:08:36.567 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.567 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:36.567 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64442' 00:08:36.567 killing process with pid 64442 00:08:36.567 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 64442 00:08:36.567 [2024-09-28 08:46:14.354394] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.567 08:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 64442 00:08:36.826 [2024-09-28 08:46:14.673714] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.206 08:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:38.206 00:08:38.206 real 0m10.769s 00:08:38.206 user 0m16.787s 00:08:38.206 sys 0m1.954s 00:08:38.206 ************************************ 00:08:38.206 END TEST raid_state_function_test_sb 00:08:38.206 ************************************ 00:08:38.206 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.206 08:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.206 08:46:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:38.207 08:46:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:38.207 
08:46:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.207 08:46:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.207 ************************************ 00:08:38.207 START TEST raid_superblock_test 00:08:38.207 ************************************ 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65068 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65068 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 65068 ']' 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.207 08:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.207 [2024-09-28 08:46:16.195622] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:38.207 [2024-09-28 08:46:16.195784] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65068 ] 00:08:38.467 [2024-09-28 08:46:16.365694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.727 [2024-09-28 08:46:16.609158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.986 [2024-09-28 08:46:16.841602] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.986 [2024-09-28 08:46:16.841636] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:39.247 
08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.247 malloc1 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.247 [2024-09-28 08:46:17.081757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:39.247 [2024-09-28 08:46:17.081879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.247 [2024-09-28 08:46:17.081925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:39.247 [2024-09-28 08:46:17.081961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.247 [2024-09-28 08:46:17.084285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.247 [2024-09-28 08:46:17.084357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:39.247 pt1 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.247 malloc2 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.247 [2024-09-28 08:46:17.174385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:39.247 [2024-09-28 08:46:17.174458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.247 [2024-09-28 08:46:17.174483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:39.247 [2024-09-28 08:46:17.174493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.247 [2024-09-28 08:46:17.176898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.247 [2024-09-28 08:46:17.176934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:39.247 
pt2 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.247 malloc3 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.247 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.247 [2024-09-28 08:46:17.234633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:39.247 [2024-09-28 08:46:17.234702] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.247 [2024-09-28 08:46:17.234724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:39.247 [2024-09-28 08:46:17.234734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.247 [2024-09-28 08:46:17.237108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.247 [2024-09-28 08:46:17.237144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:39.539 pt3 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.539 [2024-09-28 08:46:17.246715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:39.539 [2024-09-28 08:46:17.248849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:39.539 [2024-09-28 08:46:17.248931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:39.539 [2024-09-28 08:46:17.249087] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:39.539 [2024-09-28 08:46:17.249102] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:39.539 [2024-09-28 08:46:17.249323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:39.539 [2024-09-28 08:46:17.249492] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:39.539 [2024-09-28 08:46:17.249506] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:39.539 [2024-09-28 08:46:17.249683] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.539 08:46:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.539 "name": "raid_bdev1", 00:08:39.539 "uuid": "f1f47e51-72dd-4670-9263-a5b67cbbdbed", 00:08:39.539 "strip_size_kb": 64, 00:08:39.539 "state": "online", 00:08:39.539 "raid_level": "raid0", 00:08:39.539 "superblock": true, 00:08:39.539 "num_base_bdevs": 3, 00:08:39.539 "num_base_bdevs_discovered": 3, 00:08:39.539 "num_base_bdevs_operational": 3, 00:08:39.539 "base_bdevs_list": [ 00:08:39.539 { 00:08:39.539 "name": "pt1", 00:08:39.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:39.539 "is_configured": true, 00:08:39.539 "data_offset": 2048, 00:08:39.539 "data_size": 63488 00:08:39.539 }, 00:08:39.539 { 00:08:39.539 "name": "pt2", 00:08:39.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:39.539 "is_configured": true, 00:08:39.539 "data_offset": 2048, 00:08:39.539 "data_size": 63488 00:08:39.539 }, 00:08:39.539 { 00:08:39.539 "name": "pt3", 00:08:39.539 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:39.539 "is_configured": true, 00:08:39.539 "data_offset": 2048, 00:08:39.539 "data_size": 63488 00:08:39.539 } 00:08:39.539 ] 00:08:39.539 }' 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.539 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.816 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:39.816 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:39.816 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:39.816 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:39.816 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:39.816 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:39.816 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:39.816 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.816 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.816 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:39.816 [2024-09-28 08:46:17.694222] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.816 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.816 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:39.816 "name": "raid_bdev1", 00:08:39.816 "aliases": [ 00:08:39.816 "f1f47e51-72dd-4670-9263-a5b67cbbdbed" 00:08:39.816 ], 00:08:39.816 "product_name": "Raid Volume", 00:08:39.816 "block_size": 512, 00:08:39.816 "num_blocks": 190464, 00:08:39.816 "uuid": "f1f47e51-72dd-4670-9263-a5b67cbbdbed", 00:08:39.816 "assigned_rate_limits": { 00:08:39.816 "rw_ios_per_sec": 0, 00:08:39.816 "rw_mbytes_per_sec": 0, 00:08:39.816 "r_mbytes_per_sec": 0, 00:08:39.816 "w_mbytes_per_sec": 0 00:08:39.816 }, 00:08:39.816 "claimed": false, 00:08:39.816 "zoned": false, 00:08:39.816 "supported_io_types": { 00:08:39.816 "read": true, 00:08:39.816 "write": true, 00:08:39.816 "unmap": true, 00:08:39.816 "flush": true, 00:08:39.816 "reset": true, 00:08:39.816 "nvme_admin": false, 00:08:39.816 "nvme_io": false, 00:08:39.816 "nvme_io_md": false, 00:08:39.816 "write_zeroes": true, 00:08:39.816 "zcopy": false, 00:08:39.816 "get_zone_info": false, 00:08:39.816 "zone_management": false, 00:08:39.816 "zone_append": false, 00:08:39.816 "compare": 
false, 00:08:39.816 "compare_and_write": false, 00:08:39.816 "abort": false, 00:08:39.816 "seek_hole": false, 00:08:39.816 "seek_data": false, 00:08:39.816 "copy": false, 00:08:39.816 "nvme_iov_md": false 00:08:39.816 }, 00:08:39.816 "memory_domains": [ 00:08:39.816 { 00:08:39.816 "dma_device_id": "system", 00:08:39.816 "dma_device_type": 1 00:08:39.816 }, 00:08:39.816 { 00:08:39.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.816 "dma_device_type": 2 00:08:39.816 }, 00:08:39.816 { 00:08:39.816 "dma_device_id": "system", 00:08:39.816 "dma_device_type": 1 00:08:39.816 }, 00:08:39.816 { 00:08:39.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.816 "dma_device_type": 2 00:08:39.816 }, 00:08:39.816 { 00:08:39.816 "dma_device_id": "system", 00:08:39.816 "dma_device_type": 1 00:08:39.816 }, 00:08:39.816 { 00:08:39.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.816 "dma_device_type": 2 00:08:39.816 } 00:08:39.816 ], 00:08:39.816 "driver_specific": { 00:08:39.816 "raid": { 00:08:39.816 "uuid": "f1f47e51-72dd-4670-9263-a5b67cbbdbed", 00:08:39.816 "strip_size_kb": 64, 00:08:39.816 "state": "online", 00:08:39.816 "raid_level": "raid0", 00:08:39.816 "superblock": true, 00:08:39.816 "num_base_bdevs": 3, 00:08:39.816 "num_base_bdevs_discovered": 3, 00:08:39.816 "num_base_bdevs_operational": 3, 00:08:39.816 "base_bdevs_list": [ 00:08:39.816 { 00:08:39.816 "name": "pt1", 00:08:39.816 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:39.816 "is_configured": true, 00:08:39.816 "data_offset": 2048, 00:08:39.816 "data_size": 63488 00:08:39.816 }, 00:08:39.816 { 00:08:39.816 "name": "pt2", 00:08:39.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:39.816 "is_configured": true, 00:08:39.816 "data_offset": 2048, 00:08:39.816 "data_size": 63488 00:08:39.816 }, 00:08:39.816 { 00:08:39.816 "name": "pt3", 00:08:39.816 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:39.816 "is_configured": true, 00:08:39.816 "data_offset": 2048, 00:08:39.816 "data_size": 
63488 00:08:39.816 } 00:08:39.816 ] 00:08:39.816 } 00:08:39.816 } 00:08:39.816 }' 00:08:39.816 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:39.816 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:39.816 pt2 00:08:39.816 pt3' 00:08:39.816 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.082 
08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.082 [2024-09-28 08:46:17.969673] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.082 08:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:40.082 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f1f47e51-72dd-4670-9263-a5b67cbbdbed 00:08:40.082 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f1f47e51-72dd-4670-9263-a5b67cbbdbed ']' 00:08:40.082 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:40.082 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.082 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.082 [2024-09-28 08:46:18.017289] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:40.082 [2024-09-28 08:46:18.017324] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.082 [2024-09-28 08:46:18.017404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.082 [2024-09-28 08:46:18.017472] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.082 [2024-09-28 08:46:18.017492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:40.082 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.082 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.082 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.082 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.082 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:40.082 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.082 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:40.082 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:40.083 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:40.083 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:40.083 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.083 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:40.343 08:46:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.343 [2024-09-28 08:46:18.161075] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:40.343 [2024-09-28 08:46:18.163189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:40.343 [2024-09-28 08:46:18.163250] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:40.343 [2024-09-28 08:46:18.163304] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:40.343 [2024-09-28 08:46:18.163350] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:40.343 [2024-09-28 08:46:18.163372] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:40.343 [2024-09-28 08:46:18.163389] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:40.343 [2024-09-28 08:46:18.163398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:40.343 request: 00:08:40.343 { 00:08:40.343 "name": "raid_bdev1", 00:08:40.343 "raid_level": "raid0", 00:08:40.343 "base_bdevs": [ 00:08:40.343 "malloc1", 00:08:40.343 "malloc2", 00:08:40.343 "malloc3" 00:08:40.343 ], 00:08:40.343 "strip_size_kb": 64, 00:08:40.343 "superblock": false, 00:08:40.343 "method": "bdev_raid_create", 00:08:40.343 "req_id": 1 00:08:40.343 } 00:08:40.343 Got JSON-RPC error response 00:08:40.343 response: 00:08:40.343 { 00:08:40.343 "code": -17, 00:08:40.343 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:40.343 } 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.343 [2024-09-28 08:46:18.212940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:40.343 [2024-09-28 08:46:18.212987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.343 [2024-09-28 08:46:18.213006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:40.343 [2024-09-28 08:46:18.213016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.343 [2024-09-28 08:46:18.215414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.343 [2024-09-28 08:46:18.215448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:40.343 [2024-09-28 08:46:18.215521] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:40.343 [2024-09-28 08:46:18.215573] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:40.343 pt1 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.343 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.343 "name": "raid_bdev1", 00:08:40.343 "uuid": "f1f47e51-72dd-4670-9263-a5b67cbbdbed", 00:08:40.343 
"strip_size_kb": 64, 00:08:40.343 "state": "configuring", 00:08:40.343 "raid_level": "raid0", 00:08:40.343 "superblock": true, 00:08:40.343 "num_base_bdevs": 3, 00:08:40.343 "num_base_bdevs_discovered": 1, 00:08:40.343 "num_base_bdevs_operational": 3, 00:08:40.343 "base_bdevs_list": [ 00:08:40.344 { 00:08:40.344 "name": "pt1", 00:08:40.344 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:40.344 "is_configured": true, 00:08:40.344 "data_offset": 2048, 00:08:40.344 "data_size": 63488 00:08:40.344 }, 00:08:40.344 { 00:08:40.344 "name": null, 00:08:40.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.344 "is_configured": false, 00:08:40.344 "data_offset": 2048, 00:08:40.344 "data_size": 63488 00:08:40.344 }, 00:08:40.344 { 00:08:40.344 "name": null, 00:08:40.344 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:40.344 "is_configured": false, 00:08:40.344 "data_offset": 2048, 00:08:40.344 "data_size": 63488 00:08:40.344 } 00:08:40.344 ] 00:08:40.344 }' 00:08:40.344 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.344 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.911 [2024-09-28 08:46:18.632228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:40.911 [2024-09-28 08:46:18.632293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.911 [2024-09-28 08:46:18.632318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:40.911 [2024-09-28 08:46:18.632328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.911 [2024-09-28 08:46:18.632812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.911 [2024-09-28 08:46:18.632837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:40.911 [2024-09-28 08:46:18.632924] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:40.911 [2024-09-28 08:46:18.632951] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:40.911 pt2 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.911 [2024-09-28 08:46:18.640233] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.911 08:46:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.911 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.911 "name": "raid_bdev1", 00:08:40.911 "uuid": "f1f47e51-72dd-4670-9263-a5b67cbbdbed", 00:08:40.911 "strip_size_kb": 64, 00:08:40.912 "state": "configuring", 00:08:40.912 "raid_level": "raid0", 00:08:40.912 "superblock": true, 00:08:40.912 "num_base_bdevs": 3, 00:08:40.912 "num_base_bdevs_discovered": 1, 00:08:40.912 "num_base_bdevs_operational": 3, 00:08:40.912 "base_bdevs_list": [ 00:08:40.912 { 00:08:40.912 "name": "pt1", 00:08:40.912 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:40.912 "is_configured": true, 00:08:40.912 "data_offset": 2048, 00:08:40.912 "data_size": 63488 00:08:40.912 }, 00:08:40.912 { 00:08:40.912 "name": null, 00:08:40.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.912 "is_configured": false, 00:08:40.912 "data_offset": 0, 00:08:40.912 "data_size": 63488 00:08:40.912 }, 00:08:40.912 { 00:08:40.912 "name": null, 00:08:40.912 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:40.912 
"is_configured": false, 00:08:40.912 "data_offset": 2048, 00:08:40.912 "data_size": 63488 00:08:40.912 } 00:08:40.912 ] 00:08:40.912 }' 00:08:40.912 08:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.912 08:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.171 [2024-09-28 08:46:19.087447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:41.171 [2024-09-28 08:46:19.087516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.171 [2024-09-28 08:46:19.087539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:41.171 [2024-09-28 08:46:19.087554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.171 [2024-09-28 08:46:19.088049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.171 [2024-09-28 08:46:19.088078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:41.171 [2024-09-28 08:46:19.088166] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:41.171 [2024-09-28 08:46:19.088214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:41.171 pt2 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.171 [2024-09-28 08:46:19.095434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:41.171 [2024-09-28 08:46:19.095499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.171 [2024-09-28 08:46:19.095512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:41.171 [2024-09-28 08:46:19.095527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.171 [2024-09-28 08:46:19.095914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.171 [2024-09-28 08:46:19.095943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:41.171 [2024-09-28 08:46:19.096008] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:41.171 [2024-09-28 08:46:19.096034] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:41.171 [2024-09-28 08:46:19.096153] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:41.171 [2024-09-28 08:46:19.096170] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:41.171 [2024-09-28 08:46:19.096465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:41.171 [2024-09-28 08:46:19.096636] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:41.171 [2024-09-28 08:46:19.096658] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:41.171 [2024-09-28 08:46:19.096795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.171 pt3 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.171 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.172 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.172 "name": "raid_bdev1", 00:08:41.172 "uuid": "f1f47e51-72dd-4670-9263-a5b67cbbdbed", 00:08:41.172 "strip_size_kb": 64, 00:08:41.172 "state": "online", 00:08:41.172 "raid_level": "raid0", 00:08:41.172 "superblock": true, 00:08:41.172 "num_base_bdevs": 3, 00:08:41.172 "num_base_bdevs_discovered": 3, 00:08:41.172 "num_base_bdevs_operational": 3, 00:08:41.172 "base_bdevs_list": [ 00:08:41.172 { 00:08:41.172 "name": "pt1", 00:08:41.172 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.172 "is_configured": true, 00:08:41.172 "data_offset": 2048, 00:08:41.172 "data_size": 63488 00:08:41.172 }, 00:08:41.172 { 00:08:41.172 "name": "pt2", 00:08:41.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.172 "is_configured": true, 00:08:41.172 "data_offset": 2048, 00:08:41.172 "data_size": 63488 00:08:41.172 }, 00:08:41.172 { 00:08:41.172 "name": "pt3", 00:08:41.172 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:41.172 "is_configured": true, 00:08:41.172 "data_offset": 2048, 00:08:41.172 "data_size": 63488 00:08:41.172 } 00:08:41.172 ] 00:08:41.172 }' 00:08:41.172 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.172 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.740 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:41.740 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:41.740 08:46:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:41.740 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:41.740 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:41.740 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.740 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:41.740 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:41.740 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.740 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.740 [2024-09-28 08:46:19.538951] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.740 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.740 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:41.740 "name": "raid_bdev1", 00:08:41.740 "aliases": [ 00:08:41.740 "f1f47e51-72dd-4670-9263-a5b67cbbdbed" 00:08:41.740 ], 00:08:41.740 "product_name": "Raid Volume", 00:08:41.740 "block_size": 512, 00:08:41.740 "num_blocks": 190464, 00:08:41.740 "uuid": "f1f47e51-72dd-4670-9263-a5b67cbbdbed", 00:08:41.740 "assigned_rate_limits": { 00:08:41.740 "rw_ios_per_sec": 0, 00:08:41.741 "rw_mbytes_per_sec": 0, 00:08:41.741 "r_mbytes_per_sec": 0, 00:08:41.741 "w_mbytes_per_sec": 0 00:08:41.741 }, 00:08:41.741 "claimed": false, 00:08:41.741 "zoned": false, 00:08:41.741 "supported_io_types": { 00:08:41.741 "read": true, 00:08:41.741 "write": true, 00:08:41.741 "unmap": true, 00:08:41.741 "flush": true, 00:08:41.741 "reset": true, 00:08:41.741 "nvme_admin": false, 00:08:41.741 "nvme_io": false, 00:08:41.741 "nvme_io_md": false, 00:08:41.741 
"write_zeroes": true, 00:08:41.741 "zcopy": false, 00:08:41.741 "get_zone_info": false, 00:08:41.741 "zone_management": false, 00:08:41.741 "zone_append": false, 00:08:41.741 "compare": false, 00:08:41.741 "compare_and_write": false, 00:08:41.741 "abort": false, 00:08:41.741 "seek_hole": false, 00:08:41.741 "seek_data": false, 00:08:41.741 "copy": false, 00:08:41.741 "nvme_iov_md": false 00:08:41.741 }, 00:08:41.741 "memory_domains": [ 00:08:41.741 { 00:08:41.741 "dma_device_id": "system", 00:08:41.741 "dma_device_type": 1 00:08:41.741 }, 00:08:41.741 { 00:08:41.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.741 "dma_device_type": 2 00:08:41.741 }, 00:08:41.741 { 00:08:41.741 "dma_device_id": "system", 00:08:41.741 "dma_device_type": 1 00:08:41.741 }, 00:08:41.741 { 00:08:41.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.741 "dma_device_type": 2 00:08:41.741 }, 00:08:41.741 { 00:08:41.741 "dma_device_id": "system", 00:08:41.741 "dma_device_type": 1 00:08:41.741 }, 00:08:41.741 { 00:08:41.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.741 "dma_device_type": 2 00:08:41.741 } 00:08:41.741 ], 00:08:41.741 "driver_specific": { 00:08:41.741 "raid": { 00:08:41.741 "uuid": "f1f47e51-72dd-4670-9263-a5b67cbbdbed", 00:08:41.741 "strip_size_kb": 64, 00:08:41.741 "state": "online", 00:08:41.741 "raid_level": "raid0", 00:08:41.741 "superblock": true, 00:08:41.741 "num_base_bdevs": 3, 00:08:41.741 "num_base_bdevs_discovered": 3, 00:08:41.741 "num_base_bdevs_operational": 3, 00:08:41.741 "base_bdevs_list": [ 00:08:41.741 { 00:08:41.741 "name": "pt1", 00:08:41.741 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.741 "is_configured": true, 00:08:41.741 "data_offset": 2048, 00:08:41.741 "data_size": 63488 00:08:41.741 }, 00:08:41.741 { 00:08:41.741 "name": "pt2", 00:08:41.741 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.741 "is_configured": true, 00:08:41.741 "data_offset": 2048, 00:08:41.741 "data_size": 63488 00:08:41.741 }, 00:08:41.741 
{ 00:08:41.741 "name": "pt3", 00:08:41.741 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:41.741 "is_configured": true, 00:08:41.741 "data_offset": 2048, 00:08:41.741 "data_size": 63488 00:08:41.741 } 00:08:41.741 ] 00:08:41.741 } 00:08:41.741 } 00:08:41.741 }' 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:41.741 pt2 00:08:41.741 pt3' 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.741 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:42.001 [2024-09-28 
08:46:19.806417] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f1f47e51-72dd-4670-9263-a5b67cbbdbed '!=' f1f47e51-72dd-4670-9263-a5b67cbbdbed ']' 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65068 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 65068 ']' 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 65068 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65068 00:08:42.001 killing process with pid 65068 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65068' 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 65068 00:08:42.001 [2024-09-28 08:46:19.890577] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.001 [2024-09-28 08:46:19.890684] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.001 [2024-09-28 08:46:19.890747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.001 [2024-09-28 08:46:19.890762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:42.001 08:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 65068 00:08:42.260 [2024-09-28 08:46:20.211811] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.643 ************************************ 00:08:43.643 END TEST raid_superblock_test 00:08:43.643 ************************************ 00:08:43.643 08:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:43.643 00:08:43.643 real 0m5.464s 00:08:43.643 user 0m7.578s 00:08:43.643 sys 0m1.001s 00:08:43.643 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.643 08:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.643 08:46:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:43.643 08:46:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:43.643 08:46:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.643 08:46:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.903 ************************************ 00:08:43.903 START TEST raid_read_error_test 00:08:43.903 ************************************ 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:43.903 08:46:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ecryvDaXri 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65321 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65321 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 65321 ']' 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.903 08:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.904 08:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.904 08:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.904 [2024-09-28 08:46:21.751367] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:43.904 [2024-09-28 08:46:21.751496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65321 ] 00:08:44.163 [2024-09-28 08:46:21.920830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.423 [2024-09-28 08:46:22.175252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.423 [2024-09-28 08:46:22.409284] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.423 [2024-09-28 08:46:22.409323] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.683 BaseBdev1_malloc 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.683 true 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.683 [2024-09-28 08:46:22.653256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:44.683 [2024-09-28 08:46:22.653330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.683 [2024-09-28 08:46:22.653351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:44.683 [2024-09-28 08:46:22.653364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.683 [2024-09-28 08:46:22.655736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.683 [2024-09-28 08:46:22.655774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:44.683 BaseBdev1 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.683 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.943 BaseBdev2_malloc 00:08:44.943 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.943 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:44.943 08:46:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.943 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.943 true 00:08:44.943 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.943 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:44.943 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.943 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.943 [2024-09-28 08:46:22.752750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:44.943 [2024-09-28 08:46:22.752805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.943 [2024-09-28 08:46:22.752822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:44.943 [2024-09-28 08:46:22.752834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.943 [2024-09-28 08:46:22.755182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.943 [2024-09-28 08:46:22.755221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:44.943 BaseBdev2 00:08:44.943 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.943 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.943 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:44.943 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.943 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.943 BaseBdev3_malloc 00:08:44.943 08:46:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.943 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.944 true 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.944 [2024-09-28 08:46:22.823554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:44.944 [2024-09-28 08:46:22.823605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.944 [2024-09-28 08:46:22.823624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:44.944 [2024-09-28 08:46:22.823635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.944 [2024-09-28 08:46:22.825959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.944 [2024-09-28 08:46:22.825995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:44.944 BaseBdev3 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.944 [2024-09-28 08:46:22.835616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.944 [2024-09-28 08:46:22.837628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.944 [2024-09-28 08:46:22.837736] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:44.944 [2024-09-28 08:46:22.837934] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:44.944 [2024-09-28 08:46:22.837954] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:44.944 [2024-09-28 08:46:22.838201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:44.944 [2024-09-28 08:46:22.838362] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:44.944 [2024-09-28 08:46:22.838378] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:44.944 [2024-09-28 08:46:22.838532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.944 08:46:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.944 "name": "raid_bdev1", 00:08:44.944 "uuid": "7d7275b9-847b-4aa8-84d0-5b30f78c31a5", 00:08:44.944 "strip_size_kb": 64, 00:08:44.944 "state": "online", 00:08:44.944 "raid_level": "raid0", 00:08:44.944 "superblock": true, 00:08:44.944 "num_base_bdevs": 3, 00:08:44.944 "num_base_bdevs_discovered": 3, 00:08:44.944 "num_base_bdevs_operational": 3, 00:08:44.944 "base_bdevs_list": [ 00:08:44.944 { 00:08:44.944 "name": "BaseBdev1", 00:08:44.944 "uuid": "a07b6b94-2ec8-5f1e-aad6-a87b0e5ec783", 00:08:44.944 "is_configured": true, 00:08:44.944 "data_offset": 2048, 00:08:44.944 "data_size": 63488 00:08:44.944 }, 00:08:44.944 { 00:08:44.944 "name": "BaseBdev2", 00:08:44.944 "uuid": "0087b359-8da3-5856-81d0-31b6fcd61f21", 00:08:44.944 "is_configured": true, 00:08:44.944 "data_offset": 2048, 00:08:44.944 "data_size": 63488 
00:08:44.944 }, 00:08:44.944 { 00:08:44.944 "name": "BaseBdev3", 00:08:44.944 "uuid": "c04f0a5d-f665-52ef-836d-a141b58c024a", 00:08:44.944 "is_configured": true, 00:08:44.944 "data_offset": 2048, 00:08:44.944 "data_size": 63488 00:08:44.944 } 00:08:44.944 ] 00:08:44.944 }' 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.944 08:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.513 08:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:45.513 08:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:45.513 [2024-09-28 08:46:23.360094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.453 "name": "raid_bdev1", 00:08:46.453 "uuid": "7d7275b9-847b-4aa8-84d0-5b30f78c31a5", 00:08:46.453 "strip_size_kb": 64, 00:08:46.453 "state": "online", 00:08:46.453 "raid_level": "raid0", 00:08:46.453 "superblock": true, 00:08:46.453 "num_base_bdevs": 3, 00:08:46.453 "num_base_bdevs_discovered": 3, 00:08:46.453 "num_base_bdevs_operational": 3, 00:08:46.453 "base_bdevs_list": [ 00:08:46.453 { 00:08:46.453 "name": "BaseBdev1", 00:08:46.453 "uuid": "a07b6b94-2ec8-5f1e-aad6-a87b0e5ec783", 00:08:46.453 "is_configured": true, 00:08:46.453 "data_offset": 2048, 00:08:46.453 "data_size": 63488 
00:08:46.453 }, 00:08:46.453 { 00:08:46.453 "name": "BaseBdev2", 00:08:46.453 "uuid": "0087b359-8da3-5856-81d0-31b6fcd61f21", 00:08:46.453 "is_configured": true, 00:08:46.453 "data_offset": 2048, 00:08:46.453 "data_size": 63488 00:08:46.453 }, 00:08:46.453 { 00:08:46.453 "name": "BaseBdev3", 00:08:46.453 "uuid": "c04f0a5d-f665-52ef-836d-a141b58c024a", 00:08:46.453 "is_configured": true, 00:08:46.453 "data_offset": 2048, 00:08:46.453 "data_size": 63488 00:08:46.453 } 00:08:46.453 ] 00:08:46.453 }' 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.453 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.714 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:46.714 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.714 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.974 [2024-09-28 08:46:24.712298] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.974 [2024-09-28 08:46:24.712337] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.974 [2024-09-28 08:46:24.714985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.974 [2024-09-28 08:46:24.715040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.974 [2024-09-28 08:46:24.715082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.974 [2024-09-28 08:46:24.715091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:46.974 { 00:08:46.974 "results": [ 00:08:46.974 { 00:08:46.974 "job": "raid_bdev1", 00:08:46.974 "core_mask": "0x1", 00:08:46.974 "workload": "randrw", 00:08:46.974 "percentage": 50, 
00:08:46.974 "status": "finished", 00:08:46.974 "queue_depth": 1, 00:08:46.974 "io_size": 131072, 00:08:46.974 "runtime": 1.352846, 00:08:46.974 "iops": 14343.095962142032, 00:08:46.974 "mibps": 1792.886995267754, 00:08:46.974 "io_failed": 1, 00:08:46.974 "io_timeout": 0, 00:08:46.974 "avg_latency_us": 98.11441547613555, 00:08:46.974 "min_latency_us": 25.823580786026202, 00:08:46.974 "max_latency_us": 1387.989519650655 00:08:46.974 } 00:08:46.974 ], 00:08:46.974 "core_count": 1 00:08:46.974 } 00:08:46.974 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.974 08:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65321 00:08:46.974 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 65321 ']' 00:08:46.974 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 65321 00:08:46.974 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:46.974 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:46.974 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65321 00:08:46.974 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:46.974 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:46.974 killing process with pid 65321 00:08:46.974 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65321' 00:08:46.974 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 65321 00:08:46.974 [2024-09-28 08:46:24.760856] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.974 08:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 65321 00:08:47.235 [2024-09-28 
08:46:25.005056] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:48.615 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:48.615 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ecryvDaXri 00:08:48.615 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:48.615 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:48.615 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:48.615 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:48.615 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:48.615 08:46:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:48.615 00:08:48.615 real 0m4.779s 00:08:48.615 user 0m5.471s 00:08:48.615 sys 0m0.703s 00:08:48.615 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.615 08:46:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.615 ************************************ 00:08:48.615 END TEST raid_read_error_test 00:08:48.615 ************************************ 00:08:48.615 08:46:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:48.615 08:46:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:48.615 08:46:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.615 08:46:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:48.615 ************************************ 00:08:48.615 START TEST raid_write_error_test 00:08:48.615 ************************************ 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:48.615 08:46:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:48.615 08:46:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:48.615 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:48.616 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:48.616 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:48.616 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:48.616 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:48.616 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tl1J4mFv2q 00:08:48.616 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65472 00:08:48.616 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:48.616 08:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65472 00:08:48.616 08:46:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 65472 ']' 00:08:48.616 08:46:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.616 08:46:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.616 08:46:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:48.616 08:46:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.616 08:46:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.616 [2024-09-28 08:46:26.596914] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:48.616 [2024-09-28 08:46:26.597026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65472 ] 00:08:48.875 [2024-09-28 08:46:26.760954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.135 [2024-09-28 08:46:27.003212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.395 [2024-09-28 08:46:27.232097] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.395 [2024-09-28 08:46:27.232134] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.655 BaseBdev1_malloc 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.655 true 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.655 [2024-09-28 08:46:27.493433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:49.655 [2024-09-28 08:46:27.493490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.655 [2024-09-28 08:46:27.493507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:49.655 [2024-09-28 08:46:27.493519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.655 [2024-09-28 08:46:27.495933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.655 [2024-09-28 08:46:27.495970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:49.655 BaseBdev1 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.655 BaseBdev2_malloc 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.655 true 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.655 [2024-09-28 08:46:27.597229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:49.655 [2024-09-28 08:46:27.597285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.655 [2024-09-28 08:46:27.597301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:49.655 [2024-09-28 08:46:27.597313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.655 [2024-09-28 08:46:27.599698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.655 [2024-09-28 08:46:27.599739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:49.655 BaseBdev2 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.655 08:46:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.655 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.915 BaseBdev3_malloc 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.915 true 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.915 [2024-09-28 08:46:27.670636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:49.915 [2024-09-28 08:46:27.670717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.915 [2024-09-28 08:46:27.670734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:49.915 [2024-09-28 08:46:27.670745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.915 [2024-09-28 08:46:27.673062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.915 [2024-09-28 08:46:27.673098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:49.915 BaseBdev3 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.915 [2024-09-28 08:46:27.682706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.915 [2024-09-28 08:46:27.684758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.915 [2024-09-28 08:46:27.684852] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:49.915 [2024-09-28 08:46:27.685040] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:49.915 [2024-09-28 08:46:27.685066] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:49.915 [2024-09-28 08:46:27.685324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:49.915 [2024-09-28 08:46:27.685477] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:49.915 [2024-09-28 08:46:27.685493] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:49.915 [2024-09-28 08:46:27.685645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.915 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.915 "name": "raid_bdev1", 00:08:49.915 "uuid": "f5be6d6b-fd8b-4719-a7ec-aa5e03553c24", 00:08:49.915 "strip_size_kb": 64, 00:08:49.915 "state": "online", 00:08:49.915 "raid_level": "raid0", 00:08:49.916 "superblock": true, 00:08:49.916 "num_base_bdevs": 3, 00:08:49.916 "num_base_bdevs_discovered": 3, 00:08:49.916 "num_base_bdevs_operational": 3, 00:08:49.916 "base_bdevs_list": [ 00:08:49.916 { 00:08:49.916 "name": "BaseBdev1", 
00:08:49.916 "uuid": "cd2d29ea-2481-5ae0-9245-1471539aca98", 00:08:49.916 "is_configured": true, 00:08:49.916 "data_offset": 2048, 00:08:49.916 "data_size": 63488 00:08:49.916 }, 00:08:49.916 { 00:08:49.916 "name": "BaseBdev2", 00:08:49.916 "uuid": "0254c829-cd60-544d-a96f-98f7aff6a6be", 00:08:49.916 "is_configured": true, 00:08:49.916 "data_offset": 2048, 00:08:49.916 "data_size": 63488 00:08:49.916 }, 00:08:49.916 { 00:08:49.916 "name": "BaseBdev3", 00:08:49.916 "uuid": "d319f763-7819-5c09-94c9-58c93e2f51e2", 00:08:49.916 "is_configured": true, 00:08:49.916 "data_offset": 2048, 00:08:49.916 "data_size": 63488 00:08:49.916 } 00:08:49.916 ] 00:08:49.916 }' 00:08:49.916 08:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.916 08:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.175 08:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:50.175 08:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:50.435 [2024-09-28 08:46:28.243218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.374 "name": "raid_bdev1", 00:08:51.374 "uuid": "f5be6d6b-fd8b-4719-a7ec-aa5e03553c24", 00:08:51.374 "strip_size_kb": 64, 00:08:51.374 "state": "online", 00:08:51.374 
"raid_level": "raid0", 00:08:51.374 "superblock": true, 00:08:51.374 "num_base_bdevs": 3, 00:08:51.374 "num_base_bdevs_discovered": 3, 00:08:51.374 "num_base_bdevs_operational": 3, 00:08:51.374 "base_bdevs_list": [ 00:08:51.374 { 00:08:51.374 "name": "BaseBdev1", 00:08:51.374 "uuid": "cd2d29ea-2481-5ae0-9245-1471539aca98", 00:08:51.374 "is_configured": true, 00:08:51.374 "data_offset": 2048, 00:08:51.374 "data_size": 63488 00:08:51.374 }, 00:08:51.374 { 00:08:51.374 "name": "BaseBdev2", 00:08:51.374 "uuid": "0254c829-cd60-544d-a96f-98f7aff6a6be", 00:08:51.374 "is_configured": true, 00:08:51.374 "data_offset": 2048, 00:08:51.374 "data_size": 63488 00:08:51.374 }, 00:08:51.374 { 00:08:51.374 "name": "BaseBdev3", 00:08:51.374 "uuid": "d319f763-7819-5c09-94c9-58c93e2f51e2", 00:08:51.374 "is_configured": true, 00:08:51.374 "data_offset": 2048, 00:08:51.374 "data_size": 63488 00:08:51.374 } 00:08:51.374 ] 00:08:51.374 }' 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.374 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.634 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.634 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.634 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.634 [2024-09-28 08:46:29.563640] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.634 [2024-09-28 08:46:29.563686] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.634 [2024-09-28 08:46:29.566248] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.634 [2024-09-28 08:46:29.566303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.634 [2024-09-28 08:46:29.566346] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.634 [2024-09-28 08:46:29.566356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:51.634 { 00:08:51.634 "results": [ 00:08:51.634 { 00:08:51.634 "job": "raid_bdev1", 00:08:51.634 "core_mask": "0x1", 00:08:51.634 "workload": "randrw", 00:08:51.634 "percentage": 50, 00:08:51.634 "status": "finished", 00:08:51.634 "queue_depth": 1, 00:08:51.634 "io_size": 131072, 00:08:51.634 "runtime": 1.320845, 00:08:51.634 "iops": 14537.663389724003, 00:08:51.634 "mibps": 1817.2079237155003, 00:08:51.634 "io_failed": 1, 00:08:51.634 "io_timeout": 0, 00:08:51.634 "avg_latency_us": 96.92066109575764, 00:08:51.634 "min_latency_us": 21.351965065502185, 00:08:51.634 "max_latency_us": 1395.1441048034935 00:08:51.634 } 00:08:51.634 ], 00:08:51.634 "core_count": 1 00:08:51.634 } 00:08:51.634 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.634 08:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65472 00:08:51.634 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 65472 ']' 00:08:51.634 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 65472 00:08:51.634 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:51.634 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:51.634 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65472 00:08:51.634 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:51.634 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:51.634 08:46:29 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 65472' 00:08:51.634 killing process with pid 65472 00:08:51.634 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 65472 00:08:51.634 [2024-09-28 08:46:29.606163] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.634 08:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 65472 00:08:51.894 [2024-09-28 08:46:29.848585] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.276 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:53.276 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:53.276 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tl1J4mFv2q 00:08:53.276 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:08:53.276 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:53.276 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:53.276 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:53.276 08:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:08:53.276 00:08:53.276 real 0m4.756s 00:08:53.276 user 0m5.441s 00:08:53.276 sys 0m0.681s 00:08:53.276 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.276 08:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.276 ************************************ 00:08:53.276 END TEST raid_write_error_test 00:08:53.276 ************************************ 00:08:53.536 08:46:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:53.536 08:46:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:53.536 08:46:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:53.536 08:46:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.536 08:46:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.536 ************************************ 00:08:53.536 START TEST raid_state_function_test 00:08:53.536 ************************************ 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:53.536 08:46:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65615 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:53.536 Process raid pid: 65615 00:08:53.536 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65615' 00:08:53.537 08:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65615 00:08:53.537 08:46:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 65615 ']' 00:08:53.537 08:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.537 08:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.537 08:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.537 08:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.537 08:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.537 [2024-09-28 08:46:31.423048] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:53.537 [2024-09-28 08:46:31.423197] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.797 [2024-09-28 08:46:31.594734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.059 [2024-09-28 08:46:31.845720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.319 [2024-09-28 08:46:32.076077] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.319 [2024-09-28 08:46:32.076114] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.319 [2024-09-28 08:46:32.258689] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:54.319 [2024-09-28 08:46:32.258740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:54.319 [2024-09-28 08:46:32.258750] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.319 [2024-09-28 08:46:32.258760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.319 [2024-09-28 08:46:32.258766] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:54.319 [2024-09-28 08:46:32.258776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.319 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.579 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.579 "name": "Existed_Raid", 00:08:54.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.579 "strip_size_kb": 64, 00:08:54.579 "state": "configuring", 00:08:54.579 "raid_level": "concat", 00:08:54.579 "superblock": false, 00:08:54.579 "num_base_bdevs": 3, 00:08:54.579 "num_base_bdevs_discovered": 0, 00:08:54.579 "num_base_bdevs_operational": 3, 00:08:54.579 "base_bdevs_list": [ 00:08:54.579 { 00:08:54.579 "name": "BaseBdev1", 00:08:54.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.579 "is_configured": false, 00:08:54.579 "data_offset": 0, 00:08:54.579 "data_size": 0 00:08:54.579 }, 00:08:54.579 { 00:08:54.579 "name": "BaseBdev2", 00:08:54.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.579 "is_configured": false, 00:08:54.579 "data_offset": 0, 00:08:54.579 "data_size": 0 00:08:54.579 }, 00:08:54.579 { 00:08:54.579 "name": "BaseBdev3", 00:08:54.579 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:54.579 "is_configured": false, 00:08:54.579 "data_offset": 0, 00:08:54.579 "data_size": 0 00:08:54.579 } 00:08:54.579 ] 00:08:54.579 }' 00:08:54.579 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.579 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.839 [2024-09-28 08:46:32.641936] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:54.839 [2024-09-28 08:46:32.641980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.839 [2024-09-28 08:46:32.653942] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:54.839 [2024-09-28 08:46:32.653989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:54.839 [2024-09-28 08:46:32.653998] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.839 [2024-09-28 08:46:32.654008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:54.839 [2024-09-28 08:46:32.654014] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:54.839 [2024-09-28 08:46:32.654024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.839 [2024-09-28 08:46:32.742483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.839 BaseBdev1 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.839 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.840 [ 00:08:54.840 { 00:08:54.840 "name": "BaseBdev1", 00:08:54.840 "aliases": [ 00:08:54.840 "610bca2e-7d95-4540-a1e8-b6b9017b1d34" 00:08:54.840 ], 00:08:54.840 "product_name": "Malloc disk", 00:08:54.840 "block_size": 512, 00:08:54.840 "num_blocks": 65536, 00:08:54.840 "uuid": "610bca2e-7d95-4540-a1e8-b6b9017b1d34", 00:08:54.840 "assigned_rate_limits": { 00:08:54.840 "rw_ios_per_sec": 0, 00:08:54.840 "rw_mbytes_per_sec": 0, 00:08:54.840 "r_mbytes_per_sec": 0, 00:08:54.840 "w_mbytes_per_sec": 0 00:08:54.840 }, 00:08:54.840 "claimed": true, 00:08:54.840 "claim_type": "exclusive_write", 00:08:54.840 "zoned": false, 00:08:54.840 "supported_io_types": { 00:08:54.840 "read": true, 00:08:54.840 "write": true, 00:08:54.840 "unmap": true, 00:08:54.840 "flush": true, 00:08:54.840 "reset": true, 00:08:54.840 "nvme_admin": false, 00:08:54.840 "nvme_io": false, 00:08:54.840 "nvme_io_md": false, 00:08:54.840 "write_zeroes": true, 00:08:54.840 "zcopy": true, 00:08:54.840 "get_zone_info": false, 00:08:54.840 "zone_management": false, 00:08:54.840 "zone_append": false, 00:08:54.840 "compare": false, 00:08:54.840 "compare_and_write": false, 00:08:54.840 "abort": true, 00:08:54.840 "seek_hole": false, 00:08:54.840 "seek_data": false, 00:08:54.840 "copy": true, 00:08:54.840 "nvme_iov_md": false 00:08:54.840 }, 00:08:54.840 "memory_domains": [ 00:08:54.840 { 00:08:54.840 "dma_device_id": "system", 00:08:54.840 "dma_device_type": 1 00:08:54.840 }, 00:08:54.840 { 00:08:54.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:54.840 "dma_device_type": 2 00:08:54.840 } 00:08:54.840 ], 00:08:54.840 "driver_specific": {} 00:08:54.840 } 00:08:54.840 ] 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.840 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.840 08:46:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.102 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.102 "name": "Existed_Raid", 00:08:55.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.102 "strip_size_kb": 64, 00:08:55.102 "state": "configuring", 00:08:55.102 "raid_level": "concat", 00:08:55.102 "superblock": false, 00:08:55.102 "num_base_bdevs": 3, 00:08:55.102 "num_base_bdevs_discovered": 1, 00:08:55.102 "num_base_bdevs_operational": 3, 00:08:55.102 "base_bdevs_list": [ 00:08:55.102 { 00:08:55.102 "name": "BaseBdev1", 00:08:55.102 "uuid": "610bca2e-7d95-4540-a1e8-b6b9017b1d34", 00:08:55.102 "is_configured": true, 00:08:55.102 "data_offset": 0, 00:08:55.102 "data_size": 65536 00:08:55.102 }, 00:08:55.102 { 00:08:55.102 "name": "BaseBdev2", 00:08:55.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.102 "is_configured": false, 00:08:55.102 "data_offset": 0, 00:08:55.102 "data_size": 0 00:08:55.102 }, 00:08:55.102 { 00:08:55.102 "name": "BaseBdev3", 00:08:55.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.102 "is_configured": false, 00:08:55.102 "data_offset": 0, 00:08:55.102 "data_size": 0 00:08:55.102 } 00:08:55.102 ] 00:08:55.102 }' 00:08:55.102 08:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.102 08:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.362 [2024-09-28 08:46:33.233715] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.362 [2024-09-28 08:46:33.233777] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.362 [2024-09-28 08:46:33.245743] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.362 [2024-09-28 08:46:33.247913] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.362 [2024-09-28 08:46:33.247958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.362 [2024-09-28 08:46:33.247969] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.362 [2024-09-28 08:46:33.247978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.362 08:46:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.362 "name": "Existed_Raid", 00:08:55.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.362 "strip_size_kb": 64, 00:08:55.362 "state": "configuring", 00:08:55.362 "raid_level": "concat", 00:08:55.362 "superblock": false, 00:08:55.362 "num_base_bdevs": 3, 00:08:55.362 "num_base_bdevs_discovered": 1, 00:08:55.362 "num_base_bdevs_operational": 3, 00:08:55.362 "base_bdevs_list": [ 00:08:55.362 { 00:08:55.362 "name": "BaseBdev1", 00:08:55.362 "uuid": "610bca2e-7d95-4540-a1e8-b6b9017b1d34", 00:08:55.362 "is_configured": true, 00:08:55.362 "data_offset": 
0, 00:08:55.362 "data_size": 65536 00:08:55.362 }, 00:08:55.362 { 00:08:55.362 "name": "BaseBdev2", 00:08:55.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.362 "is_configured": false, 00:08:55.362 "data_offset": 0, 00:08:55.362 "data_size": 0 00:08:55.362 }, 00:08:55.362 { 00:08:55.362 "name": "BaseBdev3", 00:08:55.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.362 "is_configured": false, 00:08:55.362 "data_offset": 0, 00:08:55.362 "data_size": 0 00:08:55.362 } 00:08:55.362 ] 00:08:55.362 }' 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.362 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.932 [2024-09-28 08:46:33.756220] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.932 BaseBdev2 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.932 [ 00:08:55.932 { 00:08:55.932 "name": "BaseBdev2", 00:08:55.932 "aliases": [ 00:08:55.932 "fad120f2-7705-4bab-bf45-cfe1d3145d1d" 00:08:55.932 ], 00:08:55.932 "product_name": "Malloc disk", 00:08:55.932 "block_size": 512, 00:08:55.932 "num_blocks": 65536, 00:08:55.932 "uuid": "fad120f2-7705-4bab-bf45-cfe1d3145d1d", 00:08:55.932 "assigned_rate_limits": { 00:08:55.932 "rw_ios_per_sec": 0, 00:08:55.932 "rw_mbytes_per_sec": 0, 00:08:55.932 "r_mbytes_per_sec": 0, 00:08:55.932 "w_mbytes_per_sec": 0 00:08:55.932 }, 00:08:55.932 "claimed": true, 00:08:55.932 "claim_type": "exclusive_write", 00:08:55.932 "zoned": false, 00:08:55.932 "supported_io_types": { 00:08:55.932 "read": true, 00:08:55.932 "write": true, 00:08:55.932 "unmap": true, 00:08:55.932 "flush": true, 00:08:55.932 "reset": true, 00:08:55.932 "nvme_admin": false, 00:08:55.932 "nvme_io": false, 00:08:55.932 "nvme_io_md": false, 00:08:55.932 "write_zeroes": true, 00:08:55.932 "zcopy": true, 00:08:55.932 "get_zone_info": false, 00:08:55.932 "zone_management": false, 00:08:55.932 "zone_append": false, 00:08:55.932 "compare": false, 00:08:55.932 "compare_and_write": false, 00:08:55.932 "abort": true, 00:08:55.932 "seek_hole": 
false, 00:08:55.932 "seek_data": false, 00:08:55.932 "copy": true, 00:08:55.932 "nvme_iov_md": false 00:08:55.932 }, 00:08:55.932 "memory_domains": [ 00:08:55.932 { 00:08:55.932 "dma_device_id": "system", 00:08:55.932 "dma_device_type": 1 00:08:55.932 }, 00:08:55.932 { 00:08:55.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.932 "dma_device_type": 2 00:08:55.932 } 00:08:55.932 ], 00:08:55.932 "driver_specific": {} 00:08:55.932 } 00:08:55.932 ] 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.932 "name": "Existed_Raid", 00:08:55.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.932 "strip_size_kb": 64, 00:08:55.932 "state": "configuring", 00:08:55.932 "raid_level": "concat", 00:08:55.932 "superblock": false, 00:08:55.932 "num_base_bdevs": 3, 00:08:55.932 "num_base_bdevs_discovered": 2, 00:08:55.932 "num_base_bdevs_operational": 3, 00:08:55.932 "base_bdevs_list": [ 00:08:55.932 { 00:08:55.932 "name": "BaseBdev1", 00:08:55.932 "uuid": "610bca2e-7d95-4540-a1e8-b6b9017b1d34", 00:08:55.932 "is_configured": true, 00:08:55.932 "data_offset": 0, 00:08:55.932 "data_size": 65536 00:08:55.932 }, 00:08:55.932 { 00:08:55.932 "name": "BaseBdev2", 00:08:55.932 "uuid": "fad120f2-7705-4bab-bf45-cfe1d3145d1d", 00:08:55.932 "is_configured": true, 00:08:55.932 "data_offset": 0, 00:08:55.932 "data_size": 65536 00:08:55.932 }, 00:08:55.932 { 00:08:55.932 "name": "BaseBdev3", 00:08:55.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.932 "is_configured": false, 00:08:55.932 "data_offset": 0, 00:08:55.932 "data_size": 0 00:08:55.932 } 00:08:55.932 ] 00:08:55.932 }' 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.932 08:46:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:56.500 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:56.500 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.500 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.500 [2024-09-28 08:46:34.239583] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:56.500 [2024-09-28 08:46:34.239637] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:56.500 [2024-09-28 08:46:34.239665] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:56.500 [2024-09-28 08:46:34.240155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:56.500 [2024-09-28 08:46:34.240370] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:56.500 [2024-09-28 08:46:34.240389] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:56.500 [2024-09-28 08:46:34.240701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.500 BaseBdev3 00:08:56.500 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.500 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:56.500 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:56.500 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.500 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:56.500 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.500 08:46:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.500 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:56.500 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.500 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.500 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.500 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:56.500 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.500 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.500 [ 00:08:56.500 { 00:08:56.500 "name": "BaseBdev3", 00:08:56.500 "aliases": [ 00:08:56.500 "a21e254e-9051-4949-9177-d787c428b60e" 00:08:56.500 ], 00:08:56.500 "product_name": "Malloc disk", 00:08:56.500 "block_size": 512, 00:08:56.500 "num_blocks": 65536, 00:08:56.500 "uuid": "a21e254e-9051-4949-9177-d787c428b60e", 00:08:56.500 "assigned_rate_limits": { 00:08:56.500 "rw_ios_per_sec": 0, 00:08:56.500 "rw_mbytes_per_sec": 0, 00:08:56.500 "r_mbytes_per_sec": 0, 00:08:56.500 "w_mbytes_per_sec": 0 00:08:56.500 }, 00:08:56.500 "claimed": true, 00:08:56.501 "claim_type": "exclusive_write", 00:08:56.501 "zoned": false, 00:08:56.501 "supported_io_types": { 00:08:56.501 "read": true, 00:08:56.501 "write": true, 00:08:56.501 "unmap": true, 00:08:56.501 "flush": true, 00:08:56.501 "reset": true, 00:08:56.501 "nvme_admin": false, 00:08:56.501 "nvme_io": false, 00:08:56.501 "nvme_io_md": false, 00:08:56.501 "write_zeroes": true, 00:08:56.501 "zcopy": true, 00:08:56.501 "get_zone_info": false, 00:08:56.501 "zone_management": false, 00:08:56.501 "zone_append": false, 00:08:56.501 "compare": false, 
00:08:56.501 "compare_and_write": false, 00:08:56.501 "abort": true, 00:08:56.501 "seek_hole": false, 00:08:56.501 "seek_data": false, 00:08:56.501 "copy": true, 00:08:56.501 "nvme_iov_md": false 00:08:56.501 }, 00:08:56.501 "memory_domains": [ 00:08:56.501 { 00:08:56.501 "dma_device_id": "system", 00:08:56.501 "dma_device_type": 1 00:08:56.501 }, 00:08:56.501 { 00:08:56.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.501 "dma_device_type": 2 00:08:56.501 } 00:08:56.501 ], 00:08:56.501 "driver_specific": {} 00:08:56.501 } 00:08:56.501 ] 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.501 "name": "Existed_Raid", 00:08:56.501 "uuid": "6cb27a05-799a-4055-8163-6ce33675fa57", 00:08:56.501 "strip_size_kb": 64, 00:08:56.501 "state": "online", 00:08:56.501 "raid_level": "concat", 00:08:56.501 "superblock": false, 00:08:56.501 "num_base_bdevs": 3, 00:08:56.501 "num_base_bdevs_discovered": 3, 00:08:56.501 "num_base_bdevs_operational": 3, 00:08:56.501 "base_bdevs_list": [ 00:08:56.501 { 00:08:56.501 "name": "BaseBdev1", 00:08:56.501 "uuid": "610bca2e-7d95-4540-a1e8-b6b9017b1d34", 00:08:56.501 "is_configured": true, 00:08:56.501 "data_offset": 0, 00:08:56.501 "data_size": 65536 00:08:56.501 }, 00:08:56.501 { 00:08:56.501 "name": "BaseBdev2", 00:08:56.501 "uuid": "fad120f2-7705-4bab-bf45-cfe1d3145d1d", 00:08:56.501 "is_configured": true, 00:08:56.501 "data_offset": 0, 00:08:56.501 "data_size": 65536 00:08:56.501 }, 00:08:56.501 { 00:08:56.501 "name": "BaseBdev3", 00:08:56.501 "uuid": "a21e254e-9051-4949-9177-d787c428b60e", 00:08:56.501 "is_configured": true, 00:08:56.501 "data_offset": 0, 00:08:56.501 "data_size": 65536 00:08:56.501 } 00:08:56.501 ] 00:08:56.501 }' 00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
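The `verify_raid_bdev_state` checks traced above operate on the JSON that `rpc_cmd bdev_raid_get_bdevs all` returns, after `jq` selects the `Existed_Raid` entry. A minimal Python sketch of that verification, using the field values dumped in the log (the real helper is a Bash function in `bdev_raid.sh`; the `verify_raid_bdev_state` function below is an illustrative reconstruction, not the test's actual code):

```python
import json

# Subset of the raid_bdev_info JSON dumped by the test after BaseBdev3 was claimed
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "concat",
  "superblock": false,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    # Mirror the assertions the Bash helper makes on the jq-selected JSON
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # Once online, every base bdev the array expects must have been discovered
    if expected_state == "online":
        assert info["num_base_bdevs_discovered"] == info["num_base_bdevs"]

verify_raid_bdev_state(raid_bdev_info, "online", "concat", 64, 3)
```

Earlier in the log the same check ran with `expected_state=configuring` and `num_base_bdevs_discovered` of 1 and then 2, matching the array filling up one malloc base bdev at a time.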
00:08:56.501 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.761 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:56.761 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:56.761 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:56.761 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:56.761 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:56.761 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:56.761 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:56.761 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:56.761 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.761 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.761 [2024-09-28 08:46:34.735066] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.020 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.021 "name": "Existed_Raid", 00:08:57.021 "aliases": [ 00:08:57.021 "6cb27a05-799a-4055-8163-6ce33675fa57" 00:08:57.021 ], 00:08:57.021 "product_name": "Raid Volume", 00:08:57.021 "block_size": 512, 00:08:57.021 "num_blocks": 196608, 00:08:57.021 "uuid": "6cb27a05-799a-4055-8163-6ce33675fa57", 00:08:57.021 "assigned_rate_limits": { 00:08:57.021 "rw_ios_per_sec": 0, 00:08:57.021 "rw_mbytes_per_sec": 0, 00:08:57.021 "r_mbytes_per_sec": 
0, 00:08:57.021 "w_mbytes_per_sec": 0 00:08:57.021 }, 00:08:57.021 "claimed": false, 00:08:57.021 "zoned": false, 00:08:57.021 "supported_io_types": { 00:08:57.021 "read": true, 00:08:57.021 "write": true, 00:08:57.021 "unmap": true, 00:08:57.021 "flush": true, 00:08:57.021 "reset": true, 00:08:57.021 "nvme_admin": false, 00:08:57.021 "nvme_io": false, 00:08:57.021 "nvme_io_md": false, 00:08:57.021 "write_zeroes": true, 00:08:57.021 "zcopy": false, 00:08:57.021 "get_zone_info": false, 00:08:57.021 "zone_management": false, 00:08:57.021 "zone_append": false, 00:08:57.021 "compare": false, 00:08:57.021 "compare_and_write": false, 00:08:57.021 "abort": false, 00:08:57.021 "seek_hole": false, 00:08:57.021 "seek_data": false, 00:08:57.021 "copy": false, 00:08:57.021 "nvme_iov_md": false 00:08:57.021 }, 00:08:57.021 "memory_domains": [ 00:08:57.021 { 00:08:57.021 "dma_device_id": "system", 00:08:57.021 "dma_device_type": 1 00:08:57.021 }, 00:08:57.021 { 00:08:57.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.021 "dma_device_type": 2 00:08:57.021 }, 00:08:57.021 { 00:08:57.021 "dma_device_id": "system", 00:08:57.021 "dma_device_type": 1 00:08:57.021 }, 00:08:57.021 { 00:08:57.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.021 "dma_device_type": 2 00:08:57.021 }, 00:08:57.021 { 00:08:57.021 "dma_device_id": "system", 00:08:57.021 "dma_device_type": 1 00:08:57.021 }, 00:08:57.021 { 00:08:57.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.021 "dma_device_type": 2 00:08:57.021 } 00:08:57.021 ], 00:08:57.021 "driver_specific": { 00:08:57.021 "raid": { 00:08:57.021 "uuid": "6cb27a05-799a-4055-8163-6ce33675fa57", 00:08:57.021 "strip_size_kb": 64, 00:08:57.021 "state": "online", 00:08:57.021 "raid_level": "concat", 00:08:57.021 "superblock": false, 00:08:57.021 "num_base_bdevs": 3, 00:08:57.021 "num_base_bdevs_discovered": 3, 00:08:57.021 "num_base_bdevs_operational": 3, 00:08:57.021 "base_bdevs_list": [ 00:08:57.021 { 00:08:57.021 "name": "BaseBdev1", 
00:08:57.021 "uuid": "610bca2e-7d95-4540-a1e8-b6b9017b1d34", 00:08:57.021 "is_configured": true, 00:08:57.021 "data_offset": 0, 00:08:57.021 "data_size": 65536 00:08:57.021 }, 00:08:57.021 { 00:08:57.021 "name": "BaseBdev2", 00:08:57.021 "uuid": "fad120f2-7705-4bab-bf45-cfe1d3145d1d", 00:08:57.021 "is_configured": true, 00:08:57.021 "data_offset": 0, 00:08:57.021 "data_size": 65536 00:08:57.021 }, 00:08:57.021 { 00:08:57.021 "name": "BaseBdev3", 00:08:57.021 "uuid": "a21e254e-9051-4949-9177-d787c428b60e", 00:08:57.021 "is_configured": true, 00:08:57.021 "data_offset": 0, 00:08:57.021 "data_size": 65536 00:08:57.021 } 00:08:57.021 ] 00:08:57.021 } 00:08:57.021 } 00:08:57.021 }' 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:57.021 BaseBdev2 00:08:57.021 BaseBdev3' 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
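The `verify_raid_bdev_properties` comparisons traced here build a block-layout key with the jq filter `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` and compare the raid bdev's key against each base bdev's. For these malloc bdevs only `block_size` is present, so the key is `512` followed by three spaces, which is why the log shows `cmp_base_bdev='512 '` matching `[[ 512 == \5\1\2\ \ \ ]]`. A small Python sketch of that projection (the `layout_key` helper is an illustrative stand-in for the jq filter):

```python
def layout_key(bdev):
    # Same field order as the test's jq filter:
    # [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
    fields = [bdev.get("block_size"), bdev.get("md_size"),
              bdev.get("md_interleave"), bdev.get("dif_type")]
    # jq's join(" ") renders null/missing entries as empty strings,
    # producing trailing spaces when metadata fields are absent
    return " ".join("" if f is None else str(f) for f in fields)

raid = {"block_size": 512}   # fields as dumped for Existed_Raid
base = {"block_size": 512}   # fields as dumped for BaseBdev1..3
assert layout_key(raid) == layout_key(base) == "512   "
```

The check passes for all three base bdevs because every member of a raid array must share the raid bdev's block size and metadata layout.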
00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.021 08:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.021 [2024-09-28 08:46:35.006290] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:57.021 [2024-09-28 08:46:35.006355] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.021 [2024-09-28 08:46:35.006451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.280 "name": "Existed_Raid", 00:08:57.280 "uuid": "6cb27a05-799a-4055-8163-6ce33675fa57", 00:08:57.280 "strip_size_kb": 64, 00:08:57.280 "state": "offline", 00:08:57.280 "raid_level": "concat", 00:08:57.280 "superblock": false, 00:08:57.280 "num_base_bdevs": 3, 00:08:57.280 "num_base_bdevs_discovered": 2, 00:08:57.280 "num_base_bdevs_operational": 2, 00:08:57.280 "base_bdevs_list": [ 00:08:57.280 { 00:08:57.280 "name": null, 00:08:57.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.280 "is_configured": false, 00:08:57.280 "data_offset": 0, 00:08:57.280 "data_size": 65536 00:08:57.280 }, 00:08:57.280 { 00:08:57.280 "name": "BaseBdev2", 00:08:57.280 "uuid": 
"fad120f2-7705-4bab-bf45-cfe1d3145d1d", 00:08:57.280 "is_configured": true, 00:08:57.280 "data_offset": 0, 00:08:57.280 "data_size": 65536 00:08:57.280 }, 00:08:57.280 { 00:08:57.280 "name": "BaseBdev3", 00:08:57.280 "uuid": "a21e254e-9051-4949-9177-d787c428b60e", 00:08:57.280 "is_configured": true, 00:08:57.280 "data_offset": 0, 00:08:57.280 "data_size": 65536 00:08:57.280 } 00:08:57.280 ] 00:08:57.280 }' 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.280 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.846 [2024-09-28 08:46:35.589591] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.846 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.846 [2024-09-28 08:46:35.750677] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:57.846 [2024-09-28 08:46:35.750789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.104 08:46:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.104 BaseBdev2 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:58.104 
08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.104 [ 00:08:58.104 { 00:08:58.104 "name": "BaseBdev2", 00:08:58.104 "aliases": [ 00:08:58.104 "786adcf7-7c85-42b8-b574-ca481dc557b5" 00:08:58.104 ], 00:08:58.104 "product_name": "Malloc disk", 00:08:58.104 "block_size": 512, 00:08:58.104 "num_blocks": 65536, 00:08:58.104 "uuid": "786adcf7-7c85-42b8-b574-ca481dc557b5", 00:08:58.104 "assigned_rate_limits": { 00:08:58.104 "rw_ios_per_sec": 0, 00:08:58.104 "rw_mbytes_per_sec": 0, 00:08:58.104 "r_mbytes_per_sec": 0, 00:08:58.104 "w_mbytes_per_sec": 0 00:08:58.104 }, 00:08:58.104 "claimed": false, 00:08:58.104 "zoned": false, 00:08:58.104 "supported_io_types": { 00:08:58.104 "read": true, 00:08:58.104 "write": true, 00:08:58.104 "unmap": true, 00:08:58.104 "flush": true, 00:08:58.104 "reset": true, 00:08:58.104 "nvme_admin": false, 00:08:58.104 "nvme_io": false, 00:08:58.104 "nvme_io_md": false, 00:08:58.104 "write_zeroes": true, 
00:08:58.104 "zcopy": true, 00:08:58.104 "get_zone_info": false, 00:08:58.104 "zone_management": false, 00:08:58.104 "zone_append": false, 00:08:58.104 "compare": false, 00:08:58.104 "compare_and_write": false, 00:08:58.104 "abort": true, 00:08:58.104 "seek_hole": false, 00:08:58.104 "seek_data": false, 00:08:58.104 "copy": true, 00:08:58.104 "nvme_iov_md": false 00:08:58.104 }, 00:08:58.104 "memory_domains": [ 00:08:58.104 { 00:08:58.104 "dma_device_id": "system", 00:08:58.104 "dma_device_type": 1 00:08:58.104 }, 00:08:58.104 { 00:08:58.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.104 "dma_device_type": 2 00:08:58.104 } 00:08:58.104 ], 00:08:58.104 "driver_specific": {} 00:08:58.104 } 00:08:58.104 ] 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.104 08:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.104 BaseBdev3 00:08:58.104 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.104 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:58.104 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:58.104 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:58.104 08:46:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:58.104 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:58.104 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:58.104 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:58.104 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.104 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.105 [ 00:08:58.105 { 00:08:58.105 "name": "BaseBdev3", 00:08:58.105 "aliases": [ 00:08:58.105 "db5b4ff3-08d6-4ae4-92c7-a915d07b7f9f" 00:08:58.105 ], 00:08:58.105 "product_name": "Malloc disk", 00:08:58.105 "block_size": 512, 00:08:58.105 "num_blocks": 65536, 00:08:58.105 "uuid": "db5b4ff3-08d6-4ae4-92c7-a915d07b7f9f", 00:08:58.105 "assigned_rate_limits": { 00:08:58.105 "rw_ios_per_sec": 0, 00:08:58.105 "rw_mbytes_per_sec": 0, 00:08:58.105 "r_mbytes_per_sec": 0, 00:08:58.105 "w_mbytes_per_sec": 0 00:08:58.105 }, 00:08:58.105 "claimed": false, 00:08:58.105 "zoned": false, 00:08:58.105 "supported_io_types": { 00:08:58.105 "read": true, 00:08:58.105 "write": true, 00:08:58.105 "unmap": true, 00:08:58.105 "flush": true, 00:08:58.105 "reset": true, 00:08:58.105 "nvme_admin": false, 00:08:58.105 "nvme_io": false, 00:08:58.105 "nvme_io_md": false, 00:08:58.105 "write_zeroes": true, 
00:08:58.105 "zcopy": true, 00:08:58.105 "get_zone_info": false, 00:08:58.105 "zone_management": false, 00:08:58.105 "zone_append": false, 00:08:58.105 "compare": false, 00:08:58.105 "compare_and_write": false, 00:08:58.105 "abort": true, 00:08:58.105 "seek_hole": false, 00:08:58.105 "seek_data": false, 00:08:58.105 "copy": true, 00:08:58.105 "nvme_iov_md": false 00:08:58.105 }, 00:08:58.105 "memory_domains": [ 00:08:58.105 { 00:08:58.105 "dma_device_id": "system", 00:08:58.105 "dma_device_type": 1 00:08:58.105 }, 00:08:58.105 { 00:08:58.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.105 "dma_device_type": 2 00:08:58.105 } 00:08:58.105 ], 00:08:58.105 "driver_specific": {} 00:08:58.105 } 00:08:58.105 ] 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.105 [2024-09-28 08:46:36.077204] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.105 [2024-09-28 08:46:36.077307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.105 [2024-09-28 08:46:36.077348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.105 [2024-09-28 08:46:36.079353] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.105 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.364 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.364 08:46:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.364 "name": "Existed_Raid", 00:08:58.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.364 "strip_size_kb": 64, 00:08:58.364 "state": "configuring", 00:08:58.364 "raid_level": "concat", 00:08:58.364 "superblock": false, 00:08:58.364 "num_base_bdevs": 3, 00:08:58.364 "num_base_bdevs_discovered": 2, 00:08:58.364 "num_base_bdevs_operational": 3, 00:08:58.364 "base_bdevs_list": [ 00:08:58.364 { 00:08:58.364 "name": "BaseBdev1", 00:08:58.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.364 "is_configured": false, 00:08:58.364 "data_offset": 0, 00:08:58.364 "data_size": 0 00:08:58.364 }, 00:08:58.364 { 00:08:58.364 "name": "BaseBdev2", 00:08:58.364 "uuid": "786adcf7-7c85-42b8-b574-ca481dc557b5", 00:08:58.364 "is_configured": true, 00:08:58.364 "data_offset": 0, 00:08:58.364 "data_size": 65536 00:08:58.364 }, 00:08:58.364 { 00:08:58.364 "name": "BaseBdev3", 00:08:58.364 "uuid": "db5b4ff3-08d6-4ae4-92c7-a915d07b7f9f", 00:08:58.364 "is_configured": true, 00:08:58.364 "data_offset": 0, 00:08:58.364 "data_size": 65536 00:08:58.364 } 00:08:58.364 ] 00:08:58.364 }' 00:08:58.364 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.364 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.623 [2024-09-28 08:46:36.524434] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.623 "name": "Existed_Raid", 00:08:58.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.623 "strip_size_kb": 64, 00:08:58.623 "state": "configuring", 00:08:58.623 "raid_level": "concat", 00:08:58.623 "superblock": false, 
00:08:58.623 "num_base_bdevs": 3, 00:08:58.623 "num_base_bdevs_discovered": 1, 00:08:58.623 "num_base_bdevs_operational": 3, 00:08:58.623 "base_bdevs_list": [ 00:08:58.623 { 00:08:58.623 "name": "BaseBdev1", 00:08:58.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.623 "is_configured": false, 00:08:58.623 "data_offset": 0, 00:08:58.623 "data_size": 0 00:08:58.623 }, 00:08:58.623 { 00:08:58.623 "name": null, 00:08:58.623 "uuid": "786adcf7-7c85-42b8-b574-ca481dc557b5", 00:08:58.623 "is_configured": false, 00:08:58.623 "data_offset": 0, 00:08:58.623 "data_size": 65536 00:08:58.623 }, 00:08:58.623 { 00:08:58.623 "name": "BaseBdev3", 00:08:58.623 "uuid": "db5b4ff3-08d6-4ae4-92c7-a915d07b7f9f", 00:08:58.623 "is_configured": true, 00:08:58.623 "data_offset": 0, 00:08:58.623 "data_size": 65536 00:08:58.623 } 00:08:58.623 ] 00:08:58.623 }' 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.623 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.191 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.191 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.191 08:46:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.191 08:46:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:59.191 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.191 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:59.191 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:59.191 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.191 
08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.191 [2024-09-28 08:46:37.081595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.191 BaseBdev1 00:08:59.191 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.191 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:59.191 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:59.191 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:59.191 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:59.191 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.191 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.192 [ 00:08:59.192 { 00:08:59.192 "name": "BaseBdev1", 00:08:59.192 "aliases": [ 00:08:59.192 "d7f0e23e-0c3d-4159-887a-e0863556102c" 00:08:59.192 ], 00:08:59.192 "product_name": 
"Malloc disk", 00:08:59.192 "block_size": 512, 00:08:59.192 "num_blocks": 65536, 00:08:59.192 "uuid": "d7f0e23e-0c3d-4159-887a-e0863556102c", 00:08:59.192 "assigned_rate_limits": { 00:08:59.192 "rw_ios_per_sec": 0, 00:08:59.192 "rw_mbytes_per_sec": 0, 00:08:59.192 "r_mbytes_per_sec": 0, 00:08:59.192 "w_mbytes_per_sec": 0 00:08:59.192 }, 00:08:59.192 "claimed": true, 00:08:59.192 "claim_type": "exclusive_write", 00:08:59.192 "zoned": false, 00:08:59.192 "supported_io_types": { 00:08:59.192 "read": true, 00:08:59.192 "write": true, 00:08:59.192 "unmap": true, 00:08:59.192 "flush": true, 00:08:59.192 "reset": true, 00:08:59.192 "nvme_admin": false, 00:08:59.192 "nvme_io": false, 00:08:59.192 "nvme_io_md": false, 00:08:59.192 "write_zeroes": true, 00:08:59.192 "zcopy": true, 00:08:59.192 "get_zone_info": false, 00:08:59.192 "zone_management": false, 00:08:59.192 "zone_append": false, 00:08:59.192 "compare": false, 00:08:59.192 "compare_and_write": false, 00:08:59.192 "abort": true, 00:08:59.192 "seek_hole": false, 00:08:59.192 "seek_data": false, 00:08:59.192 "copy": true, 00:08:59.192 "nvme_iov_md": false 00:08:59.192 }, 00:08:59.192 "memory_domains": [ 00:08:59.192 { 00:08:59.192 "dma_device_id": "system", 00:08:59.192 "dma_device_type": 1 00:08:59.192 }, 00:08:59.192 { 00:08:59.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.192 "dma_device_type": 2 00:08:59.192 } 00:08:59.192 ], 00:08:59.192 "driver_specific": {} 00:08:59.192 } 00:08:59.192 ] 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.192 08:46:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.192 "name": "Existed_Raid", 00:08:59.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.192 "strip_size_kb": 64, 00:08:59.192 "state": "configuring", 00:08:59.192 "raid_level": "concat", 00:08:59.192 "superblock": false, 00:08:59.192 "num_base_bdevs": 3, 00:08:59.192 "num_base_bdevs_discovered": 2, 00:08:59.192 "num_base_bdevs_operational": 3, 00:08:59.192 "base_bdevs_list": [ 00:08:59.192 { 00:08:59.192 "name": "BaseBdev1", 
00:08:59.192 "uuid": "d7f0e23e-0c3d-4159-887a-e0863556102c", 00:08:59.192 "is_configured": true, 00:08:59.192 "data_offset": 0, 00:08:59.192 "data_size": 65536 00:08:59.192 }, 00:08:59.192 { 00:08:59.192 "name": null, 00:08:59.192 "uuid": "786adcf7-7c85-42b8-b574-ca481dc557b5", 00:08:59.192 "is_configured": false, 00:08:59.192 "data_offset": 0, 00:08:59.192 "data_size": 65536 00:08:59.192 }, 00:08:59.192 { 00:08:59.192 "name": "BaseBdev3", 00:08:59.192 "uuid": "db5b4ff3-08d6-4ae4-92c7-a915d07b7f9f", 00:08:59.192 "is_configured": true, 00:08:59.192 "data_offset": 0, 00:08:59.192 "data_size": 65536 00:08:59.192 } 00:08:59.192 ] 00:08:59.192 }' 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.192 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.762 [2024-09-28 08:46:37.600759] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:59.762 
08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.762 "name": "Existed_Raid", 00:08:59.762 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:59.762 "strip_size_kb": 64, 00:08:59.762 "state": "configuring", 00:08:59.762 "raid_level": "concat", 00:08:59.762 "superblock": false, 00:08:59.762 "num_base_bdevs": 3, 00:08:59.762 "num_base_bdevs_discovered": 1, 00:08:59.762 "num_base_bdevs_operational": 3, 00:08:59.762 "base_bdevs_list": [ 00:08:59.762 { 00:08:59.762 "name": "BaseBdev1", 00:08:59.762 "uuid": "d7f0e23e-0c3d-4159-887a-e0863556102c", 00:08:59.762 "is_configured": true, 00:08:59.762 "data_offset": 0, 00:08:59.762 "data_size": 65536 00:08:59.762 }, 00:08:59.762 { 00:08:59.762 "name": null, 00:08:59.762 "uuid": "786adcf7-7c85-42b8-b574-ca481dc557b5", 00:08:59.762 "is_configured": false, 00:08:59.762 "data_offset": 0, 00:08:59.762 "data_size": 65536 00:08:59.762 }, 00:08:59.762 { 00:08:59.762 "name": null, 00:08:59.762 "uuid": "db5b4ff3-08d6-4ae4-92c7-a915d07b7f9f", 00:08:59.762 "is_configured": false, 00:08:59.762 "data_offset": 0, 00:08:59.762 "data_size": 65536 00:08:59.762 } 00:08:59.762 ] 00:08:59.762 }' 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.762 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.021 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.021 08:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:00.021 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.021 08:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.281 [2024-09-28 08:46:38.047964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.281 "name": "Existed_Raid", 00:09:00.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.281 "strip_size_kb": 64, 00:09:00.281 "state": "configuring", 00:09:00.281 "raid_level": "concat", 00:09:00.281 "superblock": false, 00:09:00.281 "num_base_bdevs": 3, 00:09:00.281 "num_base_bdevs_discovered": 2, 00:09:00.281 "num_base_bdevs_operational": 3, 00:09:00.281 "base_bdevs_list": [ 00:09:00.281 { 00:09:00.281 "name": "BaseBdev1", 00:09:00.281 "uuid": "d7f0e23e-0c3d-4159-887a-e0863556102c", 00:09:00.281 "is_configured": true, 00:09:00.281 "data_offset": 0, 00:09:00.281 "data_size": 65536 00:09:00.281 }, 00:09:00.281 { 00:09:00.281 "name": null, 00:09:00.281 "uuid": "786adcf7-7c85-42b8-b574-ca481dc557b5", 00:09:00.281 "is_configured": false, 00:09:00.281 "data_offset": 0, 00:09:00.281 "data_size": 65536 00:09:00.281 }, 00:09:00.281 { 00:09:00.281 "name": "BaseBdev3", 00:09:00.281 "uuid": "db5b4ff3-08d6-4ae4-92c7-a915d07b7f9f", 00:09:00.281 "is_configured": true, 00:09:00.281 "data_offset": 0, 00:09:00.281 "data_size": 65536 00:09:00.281 } 00:09:00.281 ] 00:09:00.281 }' 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.281 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.541 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.541 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:00.541 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:00.541 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.541 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.541 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:00.541 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:00.541 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.541 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.541 [2024-09-28 08:46:38.523280] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:00.800 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.800 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:00.800 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.800 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.800 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.800 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.800 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.800 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.800 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.800 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.800 08:46:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.800 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.800 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.800 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.800 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.800 08:46:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.800 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.800 "name": "Existed_Raid", 00:09:00.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.800 "strip_size_kb": 64, 00:09:00.800 "state": "configuring", 00:09:00.800 "raid_level": "concat", 00:09:00.800 "superblock": false, 00:09:00.800 "num_base_bdevs": 3, 00:09:00.801 "num_base_bdevs_discovered": 1, 00:09:00.801 "num_base_bdevs_operational": 3, 00:09:00.801 "base_bdevs_list": [ 00:09:00.801 { 00:09:00.801 "name": null, 00:09:00.801 "uuid": "d7f0e23e-0c3d-4159-887a-e0863556102c", 00:09:00.801 "is_configured": false, 00:09:00.801 "data_offset": 0, 00:09:00.801 "data_size": 65536 00:09:00.801 }, 00:09:00.801 { 00:09:00.801 "name": null, 00:09:00.801 "uuid": "786adcf7-7c85-42b8-b574-ca481dc557b5", 00:09:00.801 "is_configured": false, 00:09:00.801 "data_offset": 0, 00:09:00.801 "data_size": 65536 00:09:00.801 }, 00:09:00.801 { 00:09:00.801 "name": "BaseBdev3", 00:09:00.801 "uuid": "db5b4ff3-08d6-4ae4-92c7-a915d07b7f9f", 00:09:00.801 "is_configured": true, 00:09:00.801 "data_offset": 0, 00:09:00.801 "data_size": 65536 00:09:00.801 } 00:09:00.801 ] 00:09:00.801 }' 00:09:00.801 08:46:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.801 08:46:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.369 [2024-09-28 08:46:39.109144] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.369 08:46:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.369 "name": "Existed_Raid", 00:09:01.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.369 "strip_size_kb": 64, 00:09:01.369 "state": "configuring", 00:09:01.369 "raid_level": "concat", 00:09:01.369 "superblock": false, 00:09:01.369 "num_base_bdevs": 3, 00:09:01.369 "num_base_bdevs_discovered": 2, 00:09:01.369 "num_base_bdevs_operational": 3, 00:09:01.369 "base_bdevs_list": [ 00:09:01.369 { 00:09:01.369 "name": null, 00:09:01.369 "uuid": "d7f0e23e-0c3d-4159-887a-e0863556102c", 00:09:01.369 "is_configured": false, 00:09:01.369 "data_offset": 0, 00:09:01.369 "data_size": 65536 00:09:01.369 }, 00:09:01.369 { 00:09:01.369 "name": "BaseBdev2", 00:09:01.369 "uuid": "786adcf7-7c85-42b8-b574-ca481dc557b5", 00:09:01.369 "is_configured": true, 00:09:01.369 "data_offset": 
0, 00:09:01.369 "data_size": 65536 00:09:01.369 }, 00:09:01.369 { 00:09:01.369 "name": "BaseBdev3", 00:09:01.369 "uuid": "db5b4ff3-08d6-4ae4-92c7-a915d07b7f9f", 00:09:01.369 "is_configured": true, 00:09:01.369 "data_offset": 0, 00:09:01.369 "data_size": 65536 00:09:01.369 } 00:09:01.369 ] 00:09:01.369 }' 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.369 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.629 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.629 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:01.629 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.629 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.629 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d7f0e23e-0c3d-4159-887a-e0863556102c 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.889 [2024-09-28 08:46:39.726163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:01.889 [2024-09-28 08:46:39.726212] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:01.889 [2024-09-28 08:46:39.726222] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:01.889 [2024-09-28 08:46:39.726496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:01.889 [2024-09-28 08:46:39.726667] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:01.889 [2024-09-28 08:46:39.726679] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:01.889 [2024-09-28 08:46:39.726990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.889 NewBaseBdev 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:01.889 
08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.889 [ 00:09:01.889 { 00:09:01.889 "name": "NewBaseBdev", 00:09:01.889 "aliases": [ 00:09:01.889 "d7f0e23e-0c3d-4159-887a-e0863556102c" 00:09:01.889 ], 00:09:01.889 "product_name": "Malloc disk", 00:09:01.889 "block_size": 512, 00:09:01.889 "num_blocks": 65536, 00:09:01.889 "uuid": "d7f0e23e-0c3d-4159-887a-e0863556102c", 00:09:01.889 "assigned_rate_limits": { 00:09:01.889 "rw_ios_per_sec": 0, 00:09:01.889 "rw_mbytes_per_sec": 0, 00:09:01.889 "r_mbytes_per_sec": 0, 00:09:01.889 "w_mbytes_per_sec": 0 00:09:01.889 }, 00:09:01.889 "claimed": true, 00:09:01.889 "claim_type": "exclusive_write", 00:09:01.889 "zoned": false, 00:09:01.889 "supported_io_types": { 00:09:01.889 "read": true, 00:09:01.889 "write": true, 00:09:01.889 "unmap": true, 00:09:01.889 "flush": true, 00:09:01.889 "reset": true, 00:09:01.889 "nvme_admin": false, 00:09:01.889 "nvme_io": false, 00:09:01.889 "nvme_io_md": false, 00:09:01.889 "write_zeroes": true, 00:09:01.889 "zcopy": true, 00:09:01.889 "get_zone_info": false, 00:09:01.889 "zone_management": false, 00:09:01.889 "zone_append": false, 00:09:01.889 "compare": false, 00:09:01.889 "compare_and_write": false, 00:09:01.889 "abort": true, 00:09:01.889 "seek_hole": false, 00:09:01.889 "seek_data": false, 00:09:01.889 "copy": true, 00:09:01.889 "nvme_iov_md": false 00:09:01.889 }, 00:09:01.889 
"memory_domains": [ 00:09:01.889 { 00:09:01.889 "dma_device_id": "system", 00:09:01.889 "dma_device_type": 1 00:09:01.889 }, 00:09:01.889 { 00:09:01.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.889 "dma_device_type": 2 00:09:01.889 } 00:09:01.889 ], 00:09:01.889 "driver_specific": {} 00:09:01.889 } 00:09:01.889 ] 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.889 "name": "Existed_Raid", 00:09:01.889 "uuid": "39dcdc18-63b7-4919-9f0b-3db4c8de4c45", 00:09:01.889 "strip_size_kb": 64, 00:09:01.889 "state": "online", 00:09:01.889 "raid_level": "concat", 00:09:01.889 "superblock": false, 00:09:01.889 "num_base_bdevs": 3, 00:09:01.889 "num_base_bdevs_discovered": 3, 00:09:01.889 "num_base_bdevs_operational": 3, 00:09:01.889 "base_bdevs_list": [ 00:09:01.889 { 00:09:01.889 "name": "NewBaseBdev", 00:09:01.889 "uuid": "d7f0e23e-0c3d-4159-887a-e0863556102c", 00:09:01.889 "is_configured": true, 00:09:01.889 "data_offset": 0, 00:09:01.889 "data_size": 65536 00:09:01.889 }, 00:09:01.889 { 00:09:01.889 "name": "BaseBdev2", 00:09:01.889 "uuid": "786adcf7-7c85-42b8-b574-ca481dc557b5", 00:09:01.889 "is_configured": true, 00:09:01.889 "data_offset": 0, 00:09:01.889 "data_size": 65536 00:09:01.889 }, 00:09:01.889 { 00:09:01.889 "name": "BaseBdev3", 00:09:01.889 "uuid": "db5b4ff3-08d6-4ae4-92c7-a915d07b7f9f", 00:09:01.889 "is_configured": true, 00:09:01.889 "data_offset": 0, 00:09:01.889 "data_size": 65536 00:09:01.889 } 00:09:01.889 ] 00:09:01.889 }' 00:09:01.889 08:46:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.890 08:46:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.459 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:02.459 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:02.459 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:02.459 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:02.459 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.459 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.459 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:02.459 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.459 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.459 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:02.459 [2024-09-28 08:46:40.209694] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.459 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.459 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.459 "name": "Existed_Raid", 00:09:02.459 "aliases": [ 00:09:02.459 "39dcdc18-63b7-4919-9f0b-3db4c8de4c45" 00:09:02.459 ], 00:09:02.459 "product_name": "Raid Volume", 00:09:02.459 "block_size": 512, 00:09:02.459 "num_blocks": 196608, 00:09:02.459 "uuid": "39dcdc18-63b7-4919-9f0b-3db4c8de4c45", 00:09:02.459 "assigned_rate_limits": { 00:09:02.459 "rw_ios_per_sec": 0, 00:09:02.459 "rw_mbytes_per_sec": 0, 00:09:02.459 "r_mbytes_per_sec": 0, 00:09:02.459 "w_mbytes_per_sec": 0 00:09:02.459 }, 00:09:02.459 "claimed": false, 00:09:02.459 "zoned": false, 00:09:02.459 "supported_io_types": { 00:09:02.459 "read": true, 00:09:02.459 "write": true, 00:09:02.459 "unmap": true, 00:09:02.459 "flush": true, 00:09:02.459 "reset": true, 00:09:02.459 "nvme_admin": false, 00:09:02.459 "nvme_io": false, 00:09:02.459 "nvme_io_md": false, 00:09:02.459 "write_zeroes": true, 
00:09:02.459 "zcopy": false, 00:09:02.459 "get_zone_info": false, 00:09:02.459 "zone_management": false, 00:09:02.459 "zone_append": false, 00:09:02.459 "compare": false, 00:09:02.459 "compare_and_write": false, 00:09:02.459 "abort": false, 00:09:02.459 "seek_hole": false, 00:09:02.459 "seek_data": false, 00:09:02.459 "copy": false, 00:09:02.459 "nvme_iov_md": false 00:09:02.459 }, 00:09:02.459 "memory_domains": [ 00:09:02.459 { 00:09:02.459 "dma_device_id": "system", 00:09:02.459 "dma_device_type": 1 00:09:02.459 }, 00:09:02.459 { 00:09:02.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.459 "dma_device_type": 2 00:09:02.459 }, 00:09:02.459 { 00:09:02.459 "dma_device_id": "system", 00:09:02.459 "dma_device_type": 1 00:09:02.459 }, 00:09:02.459 { 00:09:02.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.459 "dma_device_type": 2 00:09:02.459 }, 00:09:02.459 { 00:09:02.459 "dma_device_id": "system", 00:09:02.459 "dma_device_type": 1 00:09:02.459 }, 00:09:02.459 { 00:09:02.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.459 "dma_device_type": 2 00:09:02.459 } 00:09:02.459 ], 00:09:02.459 "driver_specific": { 00:09:02.459 "raid": { 00:09:02.459 "uuid": "39dcdc18-63b7-4919-9f0b-3db4c8de4c45", 00:09:02.459 "strip_size_kb": 64, 00:09:02.459 "state": "online", 00:09:02.459 "raid_level": "concat", 00:09:02.459 "superblock": false, 00:09:02.459 "num_base_bdevs": 3, 00:09:02.459 "num_base_bdevs_discovered": 3, 00:09:02.459 "num_base_bdevs_operational": 3, 00:09:02.459 "base_bdevs_list": [ 00:09:02.459 { 00:09:02.459 "name": "NewBaseBdev", 00:09:02.459 "uuid": "d7f0e23e-0c3d-4159-887a-e0863556102c", 00:09:02.459 "is_configured": true, 00:09:02.459 "data_offset": 0, 00:09:02.459 "data_size": 65536 00:09:02.459 }, 00:09:02.459 { 00:09:02.459 "name": "BaseBdev2", 00:09:02.459 "uuid": "786adcf7-7c85-42b8-b574-ca481dc557b5", 00:09:02.459 "is_configured": true, 00:09:02.459 "data_offset": 0, 00:09:02.459 "data_size": 65536 00:09:02.459 }, 00:09:02.459 { 
00:09:02.459 "name": "BaseBdev3", 00:09:02.459 "uuid": "db5b4ff3-08d6-4ae4-92c7-a915d07b7f9f", 00:09:02.459 "is_configured": true, 00:09:02.459 "data_offset": 0, 00:09:02.459 "data_size": 65536 00:09:02.459 } 00:09:02.459 ] 00:09:02.459 } 00:09:02.460 } 00:09:02.460 }' 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:02.460 BaseBdev2 00:09:02.460 BaseBdev3' 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.460 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:02.720 [2024-09-28 08:46:40.476896] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:02.720 [2024-09-28 08:46:40.476964] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.720 [2024-09-28 08:46:40.477066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.720 [2024-09-28 08:46:40.477129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.720 [2024-09-28 08:46:40.477142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65615 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 65615 ']' 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 65615 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65615 00:09:02.720 killing process with pid 65615 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65615' 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 65615 00:09:02.720 [2024-09-28 08:46:40.524726] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.720 08:46:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 65615 00:09:02.980 [2024-09-28 08:46:40.840064] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.362 ************************************ 00:09:04.362 END TEST raid_state_function_test 00:09:04.362 ************************************ 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:04.362 00:09:04.362 real 0m10.846s 00:09:04.362 user 0m16.916s 00:09:04.362 sys 0m1.998s 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.362 08:46:42 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:04.362 08:46:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:04.362 08:46:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.362 08:46:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.362 ************************************ 00:09:04.362 START TEST raid_state_function_test_sb 00:09:04.362 ************************************ 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:04.362 Process raid pid: 66238 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66238 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66238' 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66238 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 66238 ']' 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:04.362 08:46:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.362 [2024-09-28 08:46:42.337422] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:04.362 [2024-09-28 08:46:42.337605] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.622 [2024-09-28 08:46:42.501873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.882 [2024-09-28 08:46:42.757965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.141 [2024-09-28 08:46:42.990838] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.142 [2024-09-28 08:46:42.990965] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.402 [2024-09-28 08:46:43.171400] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.402 [2024-09-28 08:46:43.171455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.402 [2024-09-28 
08:46:43.171467] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.402 [2024-09-28 08:46:43.171477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.402 [2024-09-28 08:46:43.171483] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.402 [2024-09-28 08:46:43.171493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.402 "name": "Existed_Raid", 00:09:05.402 "uuid": "1a60da15-0b70-4325-a0ea-47c780dea3fe", 00:09:05.402 "strip_size_kb": 64, 00:09:05.402 "state": "configuring", 00:09:05.402 "raid_level": "concat", 00:09:05.402 "superblock": true, 00:09:05.402 "num_base_bdevs": 3, 00:09:05.402 "num_base_bdevs_discovered": 0, 00:09:05.402 "num_base_bdevs_operational": 3, 00:09:05.402 "base_bdevs_list": [ 00:09:05.402 { 00:09:05.402 "name": "BaseBdev1", 00:09:05.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.402 "is_configured": false, 00:09:05.402 "data_offset": 0, 00:09:05.402 "data_size": 0 00:09:05.402 }, 00:09:05.402 { 00:09:05.402 "name": "BaseBdev2", 00:09:05.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.402 "is_configured": false, 00:09:05.402 "data_offset": 0, 00:09:05.402 "data_size": 0 00:09:05.402 }, 00:09:05.402 { 00:09:05.402 "name": "BaseBdev3", 00:09:05.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.402 "is_configured": false, 00:09:05.402 "data_offset": 0, 00:09:05.402 "data_size": 0 00:09:05.402 } 00:09:05.402 ] 00:09:05.402 }' 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.402 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.662 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.662 08:46:43 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.662 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.662 [2024-09-28 08:46:43.646474] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.662 [2024-09-28 08:46:43.646555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:05.662 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.662 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:05.662 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.662 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.923 [2024-09-28 08:46:43.658498] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.923 [2024-09-28 08:46:43.658580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.923 [2024-09-28 08:46:43.658611] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.923 [2024-09-28 08:46:43.658635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.923 [2024-09-28 08:46:43.658671] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.923 [2024-09-28 08:46:43.658701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:05.923 
08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.923 [2024-09-28 08:46:43.747764] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.923 BaseBdev1 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.923 [ 00:09:05.923 { 
00:09:05.923 "name": "BaseBdev1", 00:09:05.923 "aliases": [ 00:09:05.923 "6a0792fd-7fd9-416d-8492-a698f1fbbb04" 00:09:05.923 ], 00:09:05.923 "product_name": "Malloc disk", 00:09:05.923 "block_size": 512, 00:09:05.923 "num_blocks": 65536, 00:09:05.923 "uuid": "6a0792fd-7fd9-416d-8492-a698f1fbbb04", 00:09:05.923 "assigned_rate_limits": { 00:09:05.923 "rw_ios_per_sec": 0, 00:09:05.923 "rw_mbytes_per_sec": 0, 00:09:05.923 "r_mbytes_per_sec": 0, 00:09:05.923 "w_mbytes_per_sec": 0 00:09:05.923 }, 00:09:05.923 "claimed": true, 00:09:05.923 "claim_type": "exclusive_write", 00:09:05.923 "zoned": false, 00:09:05.923 "supported_io_types": { 00:09:05.923 "read": true, 00:09:05.923 "write": true, 00:09:05.923 "unmap": true, 00:09:05.923 "flush": true, 00:09:05.923 "reset": true, 00:09:05.923 "nvme_admin": false, 00:09:05.923 "nvme_io": false, 00:09:05.923 "nvme_io_md": false, 00:09:05.923 "write_zeroes": true, 00:09:05.923 "zcopy": true, 00:09:05.923 "get_zone_info": false, 00:09:05.923 "zone_management": false, 00:09:05.923 "zone_append": false, 00:09:05.923 "compare": false, 00:09:05.923 "compare_and_write": false, 00:09:05.923 "abort": true, 00:09:05.923 "seek_hole": false, 00:09:05.923 "seek_data": false, 00:09:05.923 "copy": true, 00:09:05.923 "nvme_iov_md": false 00:09:05.923 }, 00:09:05.923 "memory_domains": [ 00:09:05.923 { 00:09:05.923 "dma_device_id": "system", 00:09:05.923 "dma_device_type": 1 00:09:05.923 }, 00:09:05.923 { 00:09:05.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.923 "dma_device_type": 2 00:09:05.923 } 00:09:05.923 ], 00:09:05.923 "driver_specific": {} 00:09:05.923 } 00:09:05.923 ] 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.923 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.924 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.924 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.924 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.924 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.924 "name": "Existed_Raid", 00:09:05.924 "uuid": "22f3bcdc-8bcb-40ba-b0cb-c32d0f07ba9e", 00:09:05.924 "strip_size_kb": 64, 00:09:05.924 "state": "configuring", 00:09:05.924 "raid_level": "concat", 00:09:05.924 "superblock": true, 00:09:05.924 
"num_base_bdevs": 3, 00:09:05.924 "num_base_bdevs_discovered": 1, 00:09:05.924 "num_base_bdevs_operational": 3, 00:09:05.924 "base_bdevs_list": [ 00:09:05.924 { 00:09:05.924 "name": "BaseBdev1", 00:09:05.924 "uuid": "6a0792fd-7fd9-416d-8492-a698f1fbbb04", 00:09:05.924 "is_configured": true, 00:09:05.924 "data_offset": 2048, 00:09:05.924 "data_size": 63488 00:09:05.924 }, 00:09:05.924 { 00:09:05.924 "name": "BaseBdev2", 00:09:05.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.924 "is_configured": false, 00:09:05.924 "data_offset": 0, 00:09:05.924 "data_size": 0 00:09:05.924 }, 00:09:05.924 { 00:09:05.924 "name": "BaseBdev3", 00:09:05.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.924 "is_configured": false, 00:09:05.924 "data_offset": 0, 00:09:05.924 "data_size": 0 00:09:05.924 } 00:09:05.924 ] 00:09:05.924 }' 00:09:05.924 08:46:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.924 08:46:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.493 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:06.493 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.494 [2024-09-28 08:46:44.190993] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:06.494 [2024-09-28 08:46:44.191083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:06.494 
08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.494 [2024-09-28 08:46:44.203037] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:06.494 [2024-09-28 08:46:44.205176] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:06.494 [2024-09-28 08:46:44.205229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:06.494 [2024-09-28 08:46:44.205238] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:06.494 [2024-09-28 08:46:44.205247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.494 "name": "Existed_Raid", 00:09:06.494 "uuid": "3b431acb-61b1-4c6a-a1c0-66d473d71bab", 00:09:06.494 "strip_size_kb": 64, 00:09:06.494 "state": "configuring", 00:09:06.494 "raid_level": "concat", 00:09:06.494 "superblock": true, 00:09:06.494 "num_base_bdevs": 3, 00:09:06.494 "num_base_bdevs_discovered": 1, 00:09:06.494 "num_base_bdevs_operational": 3, 00:09:06.494 "base_bdevs_list": [ 00:09:06.494 { 00:09:06.494 "name": "BaseBdev1", 00:09:06.494 "uuid": "6a0792fd-7fd9-416d-8492-a698f1fbbb04", 00:09:06.494 "is_configured": true, 00:09:06.494 "data_offset": 2048, 00:09:06.494 "data_size": 63488 00:09:06.494 }, 00:09:06.494 { 00:09:06.494 "name": "BaseBdev2", 00:09:06.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.494 "is_configured": false, 00:09:06.494 "data_offset": 0, 00:09:06.494 "data_size": 0 00:09:06.494 }, 00:09:06.494 { 00:09:06.494 "name": "BaseBdev3", 00:09:06.494 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:06.494 "is_configured": false, 00:09:06.494 "data_offset": 0, 00:09:06.494 "data_size": 0 00:09:06.494 } 00:09:06.494 ] 00:09:06.494 }' 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.494 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.754 [2024-09-28 08:46:44.691741] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:06.754 BaseBdev2 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.754 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.754 [ 00:09:06.754 { 00:09:06.754 "name": "BaseBdev2", 00:09:06.754 "aliases": [ 00:09:06.754 "1a4918a6-f8e6-4249-b33d-c4491c74456a" 00:09:06.754 ], 00:09:06.754 "product_name": "Malloc disk", 00:09:06.754 "block_size": 512, 00:09:06.754 "num_blocks": 65536, 00:09:06.754 "uuid": "1a4918a6-f8e6-4249-b33d-c4491c74456a", 00:09:06.754 "assigned_rate_limits": { 00:09:06.754 "rw_ios_per_sec": 0, 00:09:06.754 "rw_mbytes_per_sec": 0, 00:09:06.754 "r_mbytes_per_sec": 0, 00:09:06.754 "w_mbytes_per_sec": 0 00:09:06.754 }, 00:09:06.754 "claimed": true, 00:09:06.754 "claim_type": "exclusive_write", 00:09:06.754 "zoned": false, 00:09:06.754 "supported_io_types": { 00:09:06.754 "read": true, 00:09:06.754 "write": true, 00:09:06.754 "unmap": true, 00:09:06.754 "flush": true, 00:09:06.754 "reset": true, 00:09:06.754 "nvme_admin": false, 00:09:06.754 "nvme_io": false, 00:09:06.754 "nvme_io_md": false, 00:09:06.754 "write_zeroes": true, 00:09:06.754 "zcopy": true, 00:09:06.754 "get_zone_info": false, 00:09:06.754 "zone_management": false, 00:09:06.754 "zone_append": false, 00:09:06.754 "compare": false, 00:09:06.754 "compare_and_write": false, 00:09:06.754 "abort": true, 00:09:06.754 "seek_hole": false, 00:09:06.754 "seek_data": false, 00:09:06.754 "copy": true, 00:09:06.755 "nvme_iov_md": false 00:09:06.755 }, 00:09:06.755 "memory_domains": [ 00:09:06.755 { 00:09:06.755 "dma_device_id": "system", 00:09:06.755 "dma_device_type": 1 00:09:06.755 }, 00:09:06.755 { 00:09:06.755 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.755 "dma_device_type": 2 00:09:06.755 } 00:09:06.755 ], 00:09:06.755 "driver_specific": {} 00:09:06.755 } 00:09:06.755 ] 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.755 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.014 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.014 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.014 "name": "Existed_Raid", 00:09:07.014 "uuid": "3b431acb-61b1-4c6a-a1c0-66d473d71bab", 00:09:07.014 "strip_size_kb": 64, 00:09:07.014 "state": "configuring", 00:09:07.014 "raid_level": "concat", 00:09:07.014 "superblock": true, 00:09:07.014 "num_base_bdevs": 3, 00:09:07.014 "num_base_bdevs_discovered": 2, 00:09:07.014 "num_base_bdevs_operational": 3, 00:09:07.014 "base_bdevs_list": [ 00:09:07.014 { 00:09:07.014 "name": "BaseBdev1", 00:09:07.014 "uuid": "6a0792fd-7fd9-416d-8492-a698f1fbbb04", 00:09:07.014 "is_configured": true, 00:09:07.014 "data_offset": 2048, 00:09:07.014 "data_size": 63488 00:09:07.014 }, 00:09:07.014 { 00:09:07.014 "name": "BaseBdev2", 00:09:07.014 "uuid": "1a4918a6-f8e6-4249-b33d-c4491c74456a", 00:09:07.014 "is_configured": true, 00:09:07.014 "data_offset": 2048, 00:09:07.014 "data_size": 63488 00:09:07.014 }, 00:09:07.014 { 00:09:07.014 "name": "BaseBdev3", 00:09:07.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.014 "is_configured": false, 00:09:07.014 "data_offset": 0, 00:09:07.014 "data_size": 0 00:09:07.014 } 00:09:07.014 ] 00:09:07.014 }' 00:09:07.014 08:46:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.014 08:46:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.274 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:07.274 08:46:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.275 [2024-09-28 08:46:45.187286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.275 [2024-09-28 08:46:45.187559] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:07.275 [2024-09-28 08:46:45.187586] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:07.275 [2024-09-28 08:46:45.187907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:07.275 BaseBdev3 00:09:07.275 [2024-09-28 08:46:45.188263] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:07.275 [2024-09-28 08:46:45.188289] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:07.275 [2024-09-28 08:46:45.188452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.275 [ 00:09:07.275 { 00:09:07.275 "name": "BaseBdev3", 00:09:07.275 "aliases": [ 00:09:07.275 "4cc853d4-f8ee-4935-86c9-f10efb8bddab" 00:09:07.275 ], 00:09:07.275 "product_name": "Malloc disk", 00:09:07.275 "block_size": 512, 00:09:07.275 "num_blocks": 65536, 00:09:07.275 "uuid": "4cc853d4-f8ee-4935-86c9-f10efb8bddab", 00:09:07.275 "assigned_rate_limits": { 00:09:07.275 "rw_ios_per_sec": 0, 00:09:07.275 "rw_mbytes_per_sec": 0, 00:09:07.275 "r_mbytes_per_sec": 0, 00:09:07.275 "w_mbytes_per_sec": 0 00:09:07.275 }, 00:09:07.275 "claimed": true, 00:09:07.275 "claim_type": "exclusive_write", 00:09:07.275 "zoned": false, 00:09:07.275 "supported_io_types": { 00:09:07.275 "read": true, 00:09:07.275 "write": true, 00:09:07.275 "unmap": true, 00:09:07.275 "flush": true, 00:09:07.275 "reset": true, 00:09:07.275 "nvme_admin": false, 00:09:07.275 "nvme_io": false, 00:09:07.275 "nvme_io_md": false, 00:09:07.275 "write_zeroes": true, 00:09:07.275 "zcopy": true, 00:09:07.275 "get_zone_info": false, 00:09:07.275 "zone_management": false, 00:09:07.275 "zone_append": false, 00:09:07.275 "compare": false, 00:09:07.275 "compare_and_write": false, 00:09:07.275 "abort": true, 00:09:07.275 "seek_hole": false, 00:09:07.275 "seek_data": false, 
00:09:07.275 "copy": true, 00:09:07.275 "nvme_iov_md": false 00:09:07.275 }, 00:09:07.275 "memory_domains": [ 00:09:07.275 { 00:09:07.275 "dma_device_id": "system", 00:09:07.275 "dma_device_type": 1 00:09:07.275 }, 00:09:07.275 { 00:09:07.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.275 "dma_device_type": 2 00:09:07.275 } 00:09:07.275 ], 00:09:07.275 "driver_specific": {} 00:09:07.275 } 00:09:07.275 ] 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.275 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.535 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.535 "name": "Existed_Raid", 00:09:07.535 "uuid": "3b431acb-61b1-4c6a-a1c0-66d473d71bab", 00:09:07.535 "strip_size_kb": 64, 00:09:07.535 "state": "online", 00:09:07.535 "raid_level": "concat", 00:09:07.535 "superblock": true, 00:09:07.535 "num_base_bdevs": 3, 00:09:07.535 "num_base_bdevs_discovered": 3, 00:09:07.535 "num_base_bdevs_operational": 3, 00:09:07.535 "base_bdevs_list": [ 00:09:07.535 { 00:09:07.535 "name": "BaseBdev1", 00:09:07.535 "uuid": "6a0792fd-7fd9-416d-8492-a698f1fbbb04", 00:09:07.535 "is_configured": true, 00:09:07.535 "data_offset": 2048, 00:09:07.535 "data_size": 63488 00:09:07.535 }, 00:09:07.535 { 00:09:07.535 "name": "BaseBdev2", 00:09:07.535 "uuid": "1a4918a6-f8e6-4249-b33d-c4491c74456a", 00:09:07.535 "is_configured": true, 00:09:07.535 "data_offset": 2048, 00:09:07.535 "data_size": 63488 00:09:07.535 }, 00:09:07.535 { 00:09:07.535 "name": "BaseBdev3", 00:09:07.535 "uuid": "4cc853d4-f8ee-4935-86c9-f10efb8bddab", 00:09:07.535 "is_configured": true, 00:09:07.535 "data_offset": 2048, 00:09:07.535 "data_size": 63488 00:09:07.535 } 00:09:07.535 ] 00:09:07.535 }' 00:09:07.535 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.535 08:46:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.795 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:07.795 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:07.795 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:07.795 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:07.795 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:07.795 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:07.795 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:07.795 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:07.795 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.795 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.795 [2024-09-28 08:46:45.718834] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.796 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.796 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:07.796 "name": "Existed_Raid", 00:09:07.796 "aliases": [ 00:09:07.796 "3b431acb-61b1-4c6a-a1c0-66d473d71bab" 00:09:07.796 ], 00:09:07.796 "product_name": "Raid Volume", 00:09:07.796 "block_size": 512, 00:09:07.796 "num_blocks": 190464, 00:09:07.796 "uuid": "3b431acb-61b1-4c6a-a1c0-66d473d71bab", 00:09:07.796 "assigned_rate_limits": { 00:09:07.796 "rw_ios_per_sec": 0, 00:09:07.796 "rw_mbytes_per_sec": 0, 00:09:07.796 
"r_mbytes_per_sec": 0, 00:09:07.796 "w_mbytes_per_sec": 0 00:09:07.796 }, 00:09:07.796 "claimed": false, 00:09:07.796 "zoned": false, 00:09:07.796 "supported_io_types": { 00:09:07.796 "read": true, 00:09:07.796 "write": true, 00:09:07.796 "unmap": true, 00:09:07.796 "flush": true, 00:09:07.796 "reset": true, 00:09:07.796 "nvme_admin": false, 00:09:07.796 "nvme_io": false, 00:09:07.796 "nvme_io_md": false, 00:09:07.796 "write_zeroes": true, 00:09:07.796 "zcopy": false, 00:09:07.796 "get_zone_info": false, 00:09:07.796 "zone_management": false, 00:09:07.796 "zone_append": false, 00:09:07.796 "compare": false, 00:09:07.796 "compare_and_write": false, 00:09:07.796 "abort": false, 00:09:07.796 "seek_hole": false, 00:09:07.796 "seek_data": false, 00:09:07.796 "copy": false, 00:09:07.796 "nvme_iov_md": false 00:09:07.796 }, 00:09:07.796 "memory_domains": [ 00:09:07.796 { 00:09:07.796 "dma_device_id": "system", 00:09:07.796 "dma_device_type": 1 00:09:07.796 }, 00:09:07.796 { 00:09:07.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.796 "dma_device_type": 2 00:09:07.796 }, 00:09:07.796 { 00:09:07.796 "dma_device_id": "system", 00:09:07.796 "dma_device_type": 1 00:09:07.796 }, 00:09:07.796 { 00:09:07.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.796 "dma_device_type": 2 00:09:07.796 }, 00:09:07.796 { 00:09:07.796 "dma_device_id": "system", 00:09:07.796 "dma_device_type": 1 00:09:07.796 }, 00:09:07.796 { 00:09:07.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.796 "dma_device_type": 2 00:09:07.796 } 00:09:07.796 ], 00:09:07.796 "driver_specific": { 00:09:07.796 "raid": { 00:09:07.796 "uuid": "3b431acb-61b1-4c6a-a1c0-66d473d71bab", 00:09:07.796 "strip_size_kb": 64, 00:09:07.796 "state": "online", 00:09:07.796 "raid_level": "concat", 00:09:07.796 "superblock": true, 00:09:07.796 "num_base_bdevs": 3, 00:09:07.796 "num_base_bdevs_discovered": 3, 00:09:07.796 "num_base_bdevs_operational": 3, 00:09:07.796 "base_bdevs_list": [ 00:09:07.796 { 00:09:07.796 
"name": "BaseBdev1", 00:09:07.796 "uuid": "6a0792fd-7fd9-416d-8492-a698f1fbbb04", 00:09:07.796 "is_configured": true, 00:09:07.796 "data_offset": 2048, 00:09:07.796 "data_size": 63488 00:09:07.796 }, 00:09:07.796 { 00:09:07.796 "name": "BaseBdev2", 00:09:07.796 "uuid": "1a4918a6-f8e6-4249-b33d-c4491c74456a", 00:09:07.796 "is_configured": true, 00:09:07.796 "data_offset": 2048, 00:09:07.796 "data_size": 63488 00:09:07.796 }, 00:09:07.796 { 00:09:07.796 "name": "BaseBdev3", 00:09:07.796 "uuid": "4cc853d4-f8ee-4935-86c9-f10efb8bddab", 00:09:07.796 "is_configured": true, 00:09:07.796 "data_offset": 2048, 00:09:07.796 "data_size": 63488 00:09:07.796 } 00:09:07.796 ] 00:09:07.796 } 00:09:07.796 } 00:09:07.796 }' 00:09:07.796 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:08.055 BaseBdev2 00:09:08.055 BaseBdev3' 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.055 08:46:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.055 08:46:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.055 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.055 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.055 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:08.055 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.055 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.055 [2024-09-28 08:46:46.034008] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:08.055 [2024-09-28 08:46:46.034040] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.055 [2024-09-28 08:46:46.034093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.315 "name": "Existed_Raid", 00:09:08.315 "uuid": "3b431acb-61b1-4c6a-a1c0-66d473d71bab", 00:09:08.315 "strip_size_kb": 64, 00:09:08.315 "state": "offline", 00:09:08.315 "raid_level": "concat", 00:09:08.315 "superblock": true, 00:09:08.315 "num_base_bdevs": 3, 00:09:08.315 "num_base_bdevs_discovered": 2, 00:09:08.315 "num_base_bdevs_operational": 2, 00:09:08.315 "base_bdevs_list": [ 00:09:08.315 { 00:09:08.315 "name": null, 00:09:08.315 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:08.315 "is_configured": false, 00:09:08.315 "data_offset": 0, 00:09:08.315 "data_size": 63488 00:09:08.315 }, 00:09:08.315 { 00:09:08.315 "name": "BaseBdev2", 00:09:08.315 "uuid": "1a4918a6-f8e6-4249-b33d-c4491c74456a", 00:09:08.315 "is_configured": true, 00:09:08.315 "data_offset": 2048, 00:09:08.315 "data_size": 63488 00:09:08.315 }, 00:09:08.315 { 00:09:08.315 "name": "BaseBdev3", 00:09:08.315 "uuid": "4cc853d4-f8ee-4935-86c9-f10efb8bddab", 00:09:08.315 "is_configured": true, 00:09:08.315 "data_offset": 2048, 00:09:08.315 "data_size": 63488 00:09:08.315 } 00:09:08.315 ] 00:09:08.315 }' 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.315 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.594 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:08.594 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.594 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.594 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.594 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.594 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.594 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.594 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.594 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.594 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:08.594 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.594 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.594 [2024-09-28 08:46:46.552648] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:08.881 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.882 [2024-09-28 08:46:46.717508] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:08.882 [2024-09-28 08:46:46.717561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.882 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.142 BaseBdev2 00:09:09.142 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.142 
08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:09.142 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:09.142 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:09.142 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:09.142 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:09.142 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:09.142 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:09.142 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.142 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.142 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.142 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:09.142 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.142 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.142 [ 00:09:09.142 { 00:09:09.142 "name": "BaseBdev2", 00:09:09.142 "aliases": [ 00:09:09.142 "a18b2712-95b1-474e-ae98-a34c25db941c" 00:09:09.142 ], 00:09:09.142 "product_name": "Malloc disk", 00:09:09.142 "block_size": 512, 00:09:09.142 "num_blocks": 65536, 00:09:09.142 "uuid": "a18b2712-95b1-474e-ae98-a34c25db941c", 00:09:09.142 "assigned_rate_limits": { 00:09:09.142 "rw_ios_per_sec": 0, 00:09:09.142 "rw_mbytes_per_sec": 0, 00:09:09.142 "r_mbytes_per_sec": 0, 00:09:09.142 "w_mbytes_per_sec": 0 
00:09:09.142 }, 00:09:09.142 "claimed": false, 00:09:09.142 "zoned": false, 00:09:09.142 "supported_io_types": { 00:09:09.142 "read": true, 00:09:09.142 "write": true, 00:09:09.142 "unmap": true, 00:09:09.142 "flush": true, 00:09:09.142 "reset": true, 00:09:09.142 "nvme_admin": false, 00:09:09.142 "nvme_io": false, 00:09:09.143 "nvme_io_md": false, 00:09:09.143 "write_zeroes": true, 00:09:09.143 "zcopy": true, 00:09:09.143 "get_zone_info": false, 00:09:09.143 "zone_management": false, 00:09:09.143 "zone_append": false, 00:09:09.143 "compare": false, 00:09:09.143 "compare_and_write": false, 00:09:09.143 "abort": true, 00:09:09.143 "seek_hole": false, 00:09:09.143 "seek_data": false, 00:09:09.143 "copy": true, 00:09:09.143 "nvme_iov_md": false 00:09:09.143 }, 00:09:09.143 "memory_domains": [ 00:09:09.143 { 00:09:09.143 "dma_device_id": "system", 00:09:09.143 "dma_device_type": 1 00:09:09.143 }, 00:09:09.143 { 00:09:09.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.143 "dma_device_type": 2 00:09:09.143 } 00:09:09.143 ], 00:09:09.143 "driver_specific": {} 00:09:09.143 } 00:09:09.143 ] 00:09:09.143 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.143 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:09.143 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:09.143 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:09.143 08:46:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:09.143 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.143 08:46:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.143 BaseBdev3 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.143 [ 00:09:09.143 { 00:09:09.143 "name": "BaseBdev3", 00:09:09.143 "aliases": [ 00:09:09.143 "d22d63a0-d45c-4014-b995-bf2ec7bfe2ae" 00:09:09.143 ], 00:09:09.143 "product_name": "Malloc disk", 00:09:09.143 "block_size": 512, 00:09:09.143 "num_blocks": 65536, 00:09:09.143 "uuid": "d22d63a0-d45c-4014-b995-bf2ec7bfe2ae", 00:09:09.143 "assigned_rate_limits": { 00:09:09.143 "rw_ios_per_sec": 0, 00:09:09.143 "rw_mbytes_per_sec": 0, 
00:09:09.143 "r_mbytes_per_sec": 0, 00:09:09.143 "w_mbytes_per_sec": 0 00:09:09.143 }, 00:09:09.143 "claimed": false, 00:09:09.143 "zoned": false, 00:09:09.143 "supported_io_types": { 00:09:09.143 "read": true, 00:09:09.143 "write": true, 00:09:09.143 "unmap": true, 00:09:09.143 "flush": true, 00:09:09.143 "reset": true, 00:09:09.143 "nvme_admin": false, 00:09:09.143 "nvme_io": false, 00:09:09.143 "nvme_io_md": false, 00:09:09.143 "write_zeroes": true, 00:09:09.143 "zcopy": true, 00:09:09.143 "get_zone_info": false, 00:09:09.143 "zone_management": false, 00:09:09.143 "zone_append": false, 00:09:09.143 "compare": false, 00:09:09.143 "compare_and_write": false, 00:09:09.143 "abort": true, 00:09:09.143 "seek_hole": false, 00:09:09.143 "seek_data": false, 00:09:09.143 "copy": true, 00:09:09.143 "nvme_iov_md": false 00:09:09.143 }, 00:09:09.143 "memory_domains": [ 00:09:09.143 { 00:09:09.143 "dma_device_id": "system", 00:09:09.143 "dma_device_type": 1 00:09:09.143 }, 00:09:09.143 { 00:09:09.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.143 "dma_device_type": 2 00:09:09.143 } 00:09:09.143 ], 00:09:09.143 "driver_specific": {} 00:09:09.143 } 00:09:09.143 ] 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.143 [2024-09-28 08:46:47.051757] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.143 [2024-09-28 08:46:47.051839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.143 [2024-09-28 08:46:47.051886] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.143 [2024-09-28 08:46:47.053885] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.143 08:46:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.143 "name": "Existed_Raid", 00:09:09.143 "uuid": "6e0a5366-9800-424c-91ea-5b7bada77a3b", 00:09:09.143 "strip_size_kb": 64, 00:09:09.143 "state": "configuring", 00:09:09.143 "raid_level": "concat", 00:09:09.143 "superblock": true, 00:09:09.143 "num_base_bdevs": 3, 00:09:09.143 "num_base_bdevs_discovered": 2, 00:09:09.143 "num_base_bdevs_operational": 3, 00:09:09.143 "base_bdevs_list": [ 00:09:09.143 { 00:09:09.143 "name": "BaseBdev1", 00:09:09.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.143 "is_configured": false, 00:09:09.143 "data_offset": 0, 00:09:09.143 "data_size": 0 00:09:09.143 }, 00:09:09.143 { 00:09:09.143 "name": "BaseBdev2", 00:09:09.143 "uuid": "a18b2712-95b1-474e-ae98-a34c25db941c", 00:09:09.143 "is_configured": true, 00:09:09.143 "data_offset": 2048, 00:09:09.143 "data_size": 63488 00:09:09.143 }, 00:09:09.143 { 00:09:09.143 "name": "BaseBdev3", 00:09:09.143 "uuid": "d22d63a0-d45c-4014-b995-bf2ec7bfe2ae", 00:09:09.143 "is_configured": true, 00:09:09.143 "data_offset": 2048, 00:09:09.143 "data_size": 63488 00:09:09.143 } 00:09:09.143 ] 00:09:09.143 }' 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.143 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.713 [2024-09-28 08:46:47.459041] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.713 08:46:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.713 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.713 "name": "Existed_Raid", 00:09:09.713 "uuid": "6e0a5366-9800-424c-91ea-5b7bada77a3b", 00:09:09.713 "strip_size_kb": 64, 00:09:09.713 "state": "configuring", 00:09:09.713 "raid_level": "concat", 00:09:09.713 "superblock": true, 00:09:09.713 "num_base_bdevs": 3, 00:09:09.713 "num_base_bdevs_discovered": 1, 00:09:09.713 "num_base_bdevs_operational": 3, 00:09:09.713 "base_bdevs_list": [ 00:09:09.713 { 00:09:09.713 "name": "BaseBdev1", 00:09:09.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.713 "is_configured": false, 00:09:09.713 "data_offset": 0, 00:09:09.713 "data_size": 0 00:09:09.713 }, 00:09:09.713 { 00:09:09.713 "name": null, 00:09:09.713 "uuid": "a18b2712-95b1-474e-ae98-a34c25db941c", 00:09:09.713 "is_configured": false, 00:09:09.713 "data_offset": 0, 00:09:09.713 "data_size": 63488 00:09:09.713 }, 00:09:09.713 { 00:09:09.713 "name": "BaseBdev3", 00:09:09.713 "uuid": "d22d63a0-d45c-4014-b995-bf2ec7bfe2ae", 00:09:09.713 "is_configured": true, 00:09:09.714 "data_offset": 2048, 00:09:09.714 "data_size": 63488 00:09:09.714 } 00:09:09.714 ] 00:09:09.714 }' 00:09:09.714 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.714 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.973 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.973 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:09.973 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:09.973 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.973 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.973 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:09.973 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.973 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.973 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.233 [2024-09-28 08:46:47.988534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.233 BaseBdev1 00:09:10.233 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.233 08:46:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:10.233 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:10.234 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:10.234 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:10.234 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:10.234 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:10.234 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:10.234 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.234 08:46:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.234 [ 00:09:10.234 { 00:09:10.234 "name": "BaseBdev1", 00:09:10.234 "aliases": [ 00:09:10.234 "e15eb264-6bc8-462d-bb4d-90cd8ebd83f1" 00:09:10.234 ], 00:09:10.234 "product_name": "Malloc disk", 00:09:10.234 "block_size": 512, 00:09:10.234 "num_blocks": 65536, 00:09:10.234 "uuid": "e15eb264-6bc8-462d-bb4d-90cd8ebd83f1", 00:09:10.234 "assigned_rate_limits": { 00:09:10.234 "rw_ios_per_sec": 0, 00:09:10.234 "rw_mbytes_per_sec": 0, 00:09:10.234 "r_mbytes_per_sec": 0, 00:09:10.234 "w_mbytes_per_sec": 0 00:09:10.234 }, 00:09:10.234 "claimed": true, 00:09:10.234 "claim_type": "exclusive_write", 00:09:10.234 "zoned": false, 00:09:10.234 "supported_io_types": { 00:09:10.234 "read": true, 00:09:10.234 "write": true, 00:09:10.234 "unmap": true, 00:09:10.234 "flush": true, 00:09:10.234 "reset": true, 00:09:10.234 "nvme_admin": false, 00:09:10.234 "nvme_io": false, 00:09:10.234 "nvme_io_md": false, 00:09:10.234 "write_zeroes": true, 00:09:10.234 "zcopy": true, 00:09:10.234 "get_zone_info": false, 00:09:10.234 "zone_management": false, 00:09:10.234 "zone_append": false, 00:09:10.234 "compare": false, 00:09:10.234 "compare_and_write": false, 00:09:10.234 "abort": true, 00:09:10.234 "seek_hole": false, 00:09:10.234 "seek_data": false, 00:09:10.234 "copy": true, 00:09:10.234 "nvme_iov_md": false 00:09:10.234 }, 00:09:10.234 "memory_domains": [ 00:09:10.234 { 00:09:10.234 "dma_device_id": "system", 00:09:10.234 "dma_device_type": 1 00:09:10.234 }, 00:09:10.234 { 00:09:10.234 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:10.234 "dma_device_type": 2 00:09:10.234 } 00:09:10.234 ], 00:09:10.234 "driver_specific": {} 00:09:10.234 } 00:09:10.234 ] 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.234 "name": "Existed_Raid", 00:09:10.234 "uuid": "6e0a5366-9800-424c-91ea-5b7bada77a3b", 00:09:10.234 "strip_size_kb": 64, 00:09:10.234 "state": "configuring", 00:09:10.234 "raid_level": "concat", 00:09:10.234 "superblock": true, 00:09:10.234 "num_base_bdevs": 3, 00:09:10.234 "num_base_bdevs_discovered": 2, 00:09:10.234 "num_base_bdevs_operational": 3, 00:09:10.234 "base_bdevs_list": [ 00:09:10.234 { 00:09:10.234 "name": "BaseBdev1", 00:09:10.234 "uuid": "e15eb264-6bc8-462d-bb4d-90cd8ebd83f1", 00:09:10.234 "is_configured": true, 00:09:10.234 "data_offset": 2048, 00:09:10.234 "data_size": 63488 00:09:10.234 }, 00:09:10.234 { 00:09:10.234 "name": null, 00:09:10.234 "uuid": "a18b2712-95b1-474e-ae98-a34c25db941c", 00:09:10.234 "is_configured": false, 00:09:10.234 "data_offset": 0, 00:09:10.234 "data_size": 63488 00:09:10.234 }, 00:09:10.234 { 00:09:10.234 "name": "BaseBdev3", 00:09:10.234 "uuid": "d22d63a0-d45c-4014-b995-bf2ec7bfe2ae", 00:09:10.234 "is_configured": true, 00:09:10.234 "data_offset": 2048, 00:09:10.234 "data_size": 63488 00:09:10.234 } 00:09:10.234 ] 00:09:10.234 }' 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.234 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.803 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:10.803 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.803 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.803 08:46:48 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.804 [2024-09-28 08:46:48.567574] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.804 "name": "Existed_Raid", 00:09:10.804 "uuid": "6e0a5366-9800-424c-91ea-5b7bada77a3b", 00:09:10.804 "strip_size_kb": 64, 00:09:10.804 "state": "configuring", 00:09:10.804 "raid_level": "concat", 00:09:10.804 "superblock": true, 00:09:10.804 "num_base_bdevs": 3, 00:09:10.804 "num_base_bdevs_discovered": 1, 00:09:10.804 "num_base_bdevs_operational": 3, 00:09:10.804 "base_bdevs_list": [ 00:09:10.804 { 00:09:10.804 "name": "BaseBdev1", 00:09:10.804 "uuid": "e15eb264-6bc8-462d-bb4d-90cd8ebd83f1", 00:09:10.804 "is_configured": true, 00:09:10.804 "data_offset": 2048, 00:09:10.804 "data_size": 63488 00:09:10.804 }, 00:09:10.804 { 00:09:10.804 "name": null, 00:09:10.804 "uuid": "a18b2712-95b1-474e-ae98-a34c25db941c", 00:09:10.804 "is_configured": false, 00:09:10.804 "data_offset": 0, 00:09:10.804 "data_size": 63488 00:09:10.804 }, 00:09:10.804 { 00:09:10.804 "name": null, 00:09:10.804 "uuid": "d22d63a0-d45c-4014-b995-bf2ec7bfe2ae", 00:09:10.804 "is_configured": false, 00:09:10.804 "data_offset": 0, 00:09:10.804 "data_size": 63488 00:09:10.804 } 00:09:10.804 ] 00:09:10.804 }' 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.804 08:46:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:11.064 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.064 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.064 08:46:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.064 08:46:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:11.064 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.064 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:11.064 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:11.064 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.064 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.064 [2024-09-28 08:46:49.054801] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.324 08:46:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.324 "name": "Existed_Raid", 00:09:11.324 "uuid": "6e0a5366-9800-424c-91ea-5b7bada77a3b", 00:09:11.324 "strip_size_kb": 64, 00:09:11.324 "state": "configuring", 00:09:11.324 "raid_level": "concat", 00:09:11.324 "superblock": true, 00:09:11.324 "num_base_bdevs": 3, 00:09:11.324 "num_base_bdevs_discovered": 2, 00:09:11.324 "num_base_bdevs_operational": 3, 00:09:11.324 "base_bdevs_list": [ 00:09:11.324 { 00:09:11.324 "name": "BaseBdev1", 00:09:11.324 "uuid": "e15eb264-6bc8-462d-bb4d-90cd8ebd83f1", 00:09:11.324 "is_configured": true, 00:09:11.324 "data_offset": 2048, 00:09:11.324 "data_size": 63488 00:09:11.324 }, 00:09:11.324 { 00:09:11.324 "name": null, 00:09:11.324 "uuid": "a18b2712-95b1-474e-ae98-a34c25db941c", 00:09:11.324 "is_configured": 
false, 00:09:11.324 "data_offset": 0, 00:09:11.324 "data_size": 63488 00:09:11.324 }, 00:09:11.324 { 00:09:11.324 "name": "BaseBdev3", 00:09:11.324 "uuid": "d22d63a0-d45c-4014-b995-bf2ec7bfe2ae", 00:09:11.324 "is_configured": true, 00:09:11.324 "data_offset": 2048, 00:09:11.324 "data_size": 63488 00:09:11.324 } 00:09:11.324 ] 00:09:11.324 }' 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.324 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.584 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:11.584 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.584 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.584 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.584 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.584 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:11.584 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.584 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.584 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.584 [2024-09-28 08:46:49.510053] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:11.843 08:46:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.843 "name": "Existed_Raid", 00:09:11.843 "uuid": "6e0a5366-9800-424c-91ea-5b7bada77a3b", 00:09:11.843 "strip_size_kb": 64, 00:09:11.843 "state": "configuring", 00:09:11.843 "raid_level": "concat", 00:09:11.843 "superblock": true, 00:09:11.843 "num_base_bdevs": 3, 00:09:11.843 
"num_base_bdevs_discovered": 1, 00:09:11.843 "num_base_bdevs_operational": 3, 00:09:11.843 "base_bdevs_list": [ 00:09:11.843 { 00:09:11.843 "name": null, 00:09:11.843 "uuid": "e15eb264-6bc8-462d-bb4d-90cd8ebd83f1", 00:09:11.843 "is_configured": false, 00:09:11.843 "data_offset": 0, 00:09:11.843 "data_size": 63488 00:09:11.843 }, 00:09:11.843 { 00:09:11.843 "name": null, 00:09:11.843 "uuid": "a18b2712-95b1-474e-ae98-a34c25db941c", 00:09:11.843 "is_configured": false, 00:09:11.843 "data_offset": 0, 00:09:11.843 "data_size": 63488 00:09:11.843 }, 00:09:11.843 { 00:09:11.843 "name": "BaseBdev3", 00:09:11.843 "uuid": "d22d63a0-d45c-4014-b995-bf2ec7bfe2ae", 00:09:11.843 "is_configured": true, 00:09:11.843 "data_offset": 2048, 00:09:11.843 "data_size": 63488 00:09:11.843 } 00:09:11.843 ] 00:09:11.843 }' 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.843 08:46:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.103 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.103 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:12.103 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.103 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.362 08:46:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.362 [2024-09-28 08:46:50.127390] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.362 
08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.362 "name": "Existed_Raid", 00:09:12.362 "uuid": "6e0a5366-9800-424c-91ea-5b7bada77a3b", 00:09:12.362 "strip_size_kb": 64, 00:09:12.362 "state": "configuring", 00:09:12.362 "raid_level": "concat", 00:09:12.362 "superblock": true, 00:09:12.362 "num_base_bdevs": 3, 00:09:12.362 "num_base_bdevs_discovered": 2, 00:09:12.362 "num_base_bdevs_operational": 3, 00:09:12.362 "base_bdevs_list": [ 00:09:12.362 { 00:09:12.362 "name": null, 00:09:12.362 "uuid": "e15eb264-6bc8-462d-bb4d-90cd8ebd83f1", 00:09:12.362 "is_configured": false, 00:09:12.362 "data_offset": 0, 00:09:12.362 "data_size": 63488 00:09:12.362 }, 00:09:12.362 { 00:09:12.362 "name": "BaseBdev2", 00:09:12.362 "uuid": "a18b2712-95b1-474e-ae98-a34c25db941c", 00:09:12.362 "is_configured": true, 00:09:12.362 "data_offset": 2048, 00:09:12.362 "data_size": 63488 00:09:12.362 }, 00:09:12.362 { 00:09:12.362 "name": "BaseBdev3", 00:09:12.362 "uuid": "d22d63a0-d45c-4014-b995-bf2ec7bfe2ae", 00:09:12.362 "is_configured": true, 00:09:12.362 "data_offset": 2048, 00:09:12.362 "data_size": 63488 00:09:12.362 } 00:09:12.362 ] 00:09:12.362 }' 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.362 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.622 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:12.622 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.622 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.622 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:12.622 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.622 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e15eb264-6bc8-462d-bb4d-90cd8ebd83f1 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.882 [2024-09-28 08:46:50.708645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:12.882 [2024-09-28 08:46:50.708946] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:12.882 [2024-09-28 08:46:50.708964] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:12.882 [2024-09-28 08:46:50.709243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:12.882 [2024-09-28 08:46:50.709402] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:12.882 [2024-09-28 08:46:50.709411] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:09:12.882 [2024-09-28 08:46:50.709563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.882 NewBaseBdev 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.882 [ 00:09:12.882 { 00:09:12.882 "name": "NewBaseBdev", 00:09:12.882 "aliases": [ 00:09:12.882 "e15eb264-6bc8-462d-bb4d-90cd8ebd83f1" 00:09:12.882 ], 00:09:12.882 "product_name": "Malloc disk", 00:09:12.882 "block_size": 512, 
00:09:12.882 "num_blocks": 65536, 00:09:12.882 "uuid": "e15eb264-6bc8-462d-bb4d-90cd8ebd83f1", 00:09:12.882 "assigned_rate_limits": { 00:09:12.882 "rw_ios_per_sec": 0, 00:09:12.882 "rw_mbytes_per_sec": 0, 00:09:12.882 "r_mbytes_per_sec": 0, 00:09:12.882 "w_mbytes_per_sec": 0 00:09:12.882 }, 00:09:12.882 "claimed": true, 00:09:12.882 "claim_type": "exclusive_write", 00:09:12.882 "zoned": false, 00:09:12.882 "supported_io_types": { 00:09:12.882 "read": true, 00:09:12.882 "write": true, 00:09:12.882 "unmap": true, 00:09:12.882 "flush": true, 00:09:12.882 "reset": true, 00:09:12.882 "nvme_admin": false, 00:09:12.882 "nvme_io": false, 00:09:12.882 "nvme_io_md": false, 00:09:12.882 "write_zeroes": true, 00:09:12.882 "zcopy": true, 00:09:12.882 "get_zone_info": false, 00:09:12.882 "zone_management": false, 00:09:12.882 "zone_append": false, 00:09:12.882 "compare": false, 00:09:12.882 "compare_and_write": false, 00:09:12.882 "abort": true, 00:09:12.882 "seek_hole": false, 00:09:12.882 "seek_data": false, 00:09:12.882 "copy": true, 00:09:12.882 "nvme_iov_md": false 00:09:12.882 }, 00:09:12.882 "memory_domains": [ 00:09:12.882 { 00:09:12.882 "dma_device_id": "system", 00:09:12.882 "dma_device_type": 1 00:09:12.882 }, 00:09:12.882 { 00:09:12.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.882 "dma_device_type": 2 00:09:12.882 } 00:09:12.882 ], 00:09:12.882 "driver_specific": {} 00:09:12.882 } 00:09:12.882 ] 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.882 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.882 "name": "Existed_Raid", 00:09:12.882 "uuid": "6e0a5366-9800-424c-91ea-5b7bada77a3b", 00:09:12.882 "strip_size_kb": 64, 00:09:12.882 "state": "online", 00:09:12.882 "raid_level": "concat", 00:09:12.882 "superblock": true, 00:09:12.882 "num_base_bdevs": 3, 00:09:12.882 "num_base_bdevs_discovered": 3, 00:09:12.882 "num_base_bdevs_operational": 3, 00:09:12.882 "base_bdevs_list": [ 00:09:12.882 { 00:09:12.882 "name": "NewBaseBdev", 00:09:12.883 "uuid": 
"e15eb264-6bc8-462d-bb4d-90cd8ebd83f1", 00:09:12.883 "is_configured": true, 00:09:12.883 "data_offset": 2048, 00:09:12.883 "data_size": 63488 00:09:12.883 }, 00:09:12.883 { 00:09:12.883 "name": "BaseBdev2", 00:09:12.883 "uuid": "a18b2712-95b1-474e-ae98-a34c25db941c", 00:09:12.883 "is_configured": true, 00:09:12.883 "data_offset": 2048, 00:09:12.883 "data_size": 63488 00:09:12.883 }, 00:09:12.883 { 00:09:12.883 "name": "BaseBdev3", 00:09:12.883 "uuid": "d22d63a0-d45c-4014-b995-bf2ec7bfe2ae", 00:09:12.883 "is_configured": true, 00:09:12.883 "data_offset": 2048, 00:09:12.883 "data_size": 63488 00:09:12.883 } 00:09:12.883 ] 00:09:12.883 }' 00:09:12.883 08:46:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.883 08:46:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.451 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:13.451 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:13.451 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:13.451 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:13.451 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:13.451 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:13.451 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:13.451 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.451 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.451 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:09:13.451 [2024-09-28 08:46:51.224071] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.451 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.451 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:13.451 "name": "Existed_Raid", 00:09:13.451 "aliases": [ 00:09:13.451 "6e0a5366-9800-424c-91ea-5b7bada77a3b" 00:09:13.451 ], 00:09:13.451 "product_name": "Raid Volume", 00:09:13.451 "block_size": 512, 00:09:13.451 "num_blocks": 190464, 00:09:13.451 "uuid": "6e0a5366-9800-424c-91ea-5b7bada77a3b", 00:09:13.451 "assigned_rate_limits": { 00:09:13.451 "rw_ios_per_sec": 0, 00:09:13.451 "rw_mbytes_per_sec": 0, 00:09:13.451 "r_mbytes_per_sec": 0, 00:09:13.451 "w_mbytes_per_sec": 0 00:09:13.451 }, 00:09:13.451 "claimed": false, 00:09:13.451 "zoned": false, 00:09:13.451 "supported_io_types": { 00:09:13.451 "read": true, 00:09:13.451 "write": true, 00:09:13.451 "unmap": true, 00:09:13.451 "flush": true, 00:09:13.451 "reset": true, 00:09:13.451 "nvme_admin": false, 00:09:13.451 "nvme_io": false, 00:09:13.451 "nvme_io_md": false, 00:09:13.451 "write_zeroes": true, 00:09:13.451 "zcopy": false, 00:09:13.451 "get_zone_info": false, 00:09:13.451 "zone_management": false, 00:09:13.451 "zone_append": false, 00:09:13.451 "compare": false, 00:09:13.451 "compare_and_write": false, 00:09:13.451 "abort": false, 00:09:13.451 "seek_hole": false, 00:09:13.451 "seek_data": false, 00:09:13.451 "copy": false, 00:09:13.451 "nvme_iov_md": false 00:09:13.451 }, 00:09:13.451 "memory_domains": [ 00:09:13.451 { 00:09:13.451 "dma_device_id": "system", 00:09:13.451 "dma_device_type": 1 00:09:13.451 }, 00:09:13.451 { 00:09:13.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.452 "dma_device_type": 2 00:09:13.452 }, 00:09:13.452 { 00:09:13.452 "dma_device_id": "system", 00:09:13.452 "dma_device_type": 1 00:09:13.452 }, 00:09:13.452 { 00:09:13.452 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.452 "dma_device_type": 2 00:09:13.452 }, 00:09:13.452 { 00:09:13.452 "dma_device_id": "system", 00:09:13.452 "dma_device_type": 1 00:09:13.452 }, 00:09:13.452 { 00:09:13.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.452 "dma_device_type": 2 00:09:13.452 } 00:09:13.452 ], 00:09:13.452 "driver_specific": { 00:09:13.452 "raid": { 00:09:13.452 "uuid": "6e0a5366-9800-424c-91ea-5b7bada77a3b", 00:09:13.452 "strip_size_kb": 64, 00:09:13.452 "state": "online", 00:09:13.452 "raid_level": "concat", 00:09:13.452 "superblock": true, 00:09:13.452 "num_base_bdevs": 3, 00:09:13.452 "num_base_bdevs_discovered": 3, 00:09:13.452 "num_base_bdevs_operational": 3, 00:09:13.452 "base_bdevs_list": [ 00:09:13.452 { 00:09:13.452 "name": "NewBaseBdev", 00:09:13.452 "uuid": "e15eb264-6bc8-462d-bb4d-90cd8ebd83f1", 00:09:13.452 "is_configured": true, 00:09:13.452 "data_offset": 2048, 00:09:13.452 "data_size": 63488 00:09:13.452 }, 00:09:13.452 { 00:09:13.452 "name": "BaseBdev2", 00:09:13.452 "uuid": "a18b2712-95b1-474e-ae98-a34c25db941c", 00:09:13.452 "is_configured": true, 00:09:13.452 "data_offset": 2048, 00:09:13.452 "data_size": 63488 00:09:13.452 }, 00:09:13.452 { 00:09:13.452 "name": "BaseBdev3", 00:09:13.452 "uuid": "d22d63a0-d45c-4014-b995-bf2ec7bfe2ae", 00:09:13.452 "is_configured": true, 00:09:13.452 "data_offset": 2048, 00:09:13.452 "data_size": 63488 00:09:13.452 } 00:09:13.452 ] 00:09:13.452 } 00:09:13.452 } 00:09:13.452 }' 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:13.452 BaseBdev2 00:09:13.452 BaseBdev3' 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.452 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.712 [2024-09-28 08:46:51.499275] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.712 [2024-09-28 08:46:51.499304] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.712 [2024-09-28 08:46:51.499387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.712 [2024-09-28 08:46:51.499448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.712 [2024-09-28 08:46:51.499461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66238 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 66238 ']' 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 66238 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66238 00:09:13.712 killing process with pid 66238 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66238' 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 66238 00:09:13.712 [2024-09-28 08:46:51.539127] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.712 08:46:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 66238 00:09:13.971 [2024-09-28 08:46:51.849298] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:15.352 ************************************ 00:09:15.352 END TEST raid_state_function_test_sb 00:09:15.352 ************************************ 00:09:15.352 08:46:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:15.352 00:09:15.352 real 0m10.934s 
00:09:15.352 user 0m17.101s 00:09:15.352 sys 0m1.984s 00:09:15.352 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.352 08:46:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.352 08:46:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:15.352 08:46:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:15.352 08:46:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.352 08:46:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:15.352 ************************************ 00:09:15.352 START TEST raid_superblock_test 00:09:15.352 ************************************ 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:15.352 08:46:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66864 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66864 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 66864 ']' 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:15.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:15.352 08:46:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.352 [2024-09-28 08:46:53.335009] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:15.352 [2024-09-28 08:46:53.335543] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66864 ] 00:09:15.612 [2024-09-28 08:46:53.498061] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.872 [2024-09-28 08:46:53.745282] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.131 [2024-09-28 08:46:53.979265] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.131 [2024-09-28 08:46:53.979296] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.391 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.391 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:16.391 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:16.391 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.391 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:16.392 
08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.392 malloc1 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.392 [2024-09-28 08:46:54.209172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:16.392 [2024-09-28 08:46:54.209240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.392 [2024-09-28 08:46:54.209261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:16.392 [2024-09-28 08:46:54.209274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.392 [2024-09-28 08:46:54.211536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.392 [2024-09-28 08:46:54.211575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:16.392 pt1 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.392 malloc2 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.392 [2024-09-28 08:46:54.299519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:16.392 [2024-09-28 08:46:54.299574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.392 [2024-09-28 08:46:54.299597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:16.392 [2024-09-28 08:46:54.299607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.392 [2024-09-28 08:46:54.301892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.392 [2024-09-28 08:46:54.301925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:16.392 
pt2 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.392 malloc3 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.392 [2024-09-28 08:46:54.363176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:16.392 [2024-09-28 08:46:54.363225] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.392 [2024-09-28 08:46:54.363244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:16.392 [2024-09-28 08:46:54.363253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.392 [2024-09-28 08:46:54.365479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.392 [2024-09-28 08:46:54.365514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:16.392 pt3 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.392 [2024-09-28 08:46:54.375235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:16.392 [2024-09-28 08:46:54.377206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:16.392 [2024-09-28 08:46:54.377270] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:16.392 [2024-09-28 08:46:54.377435] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:16.392 [2024-09-28 08:46:54.377456] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:16.392 [2024-09-28 08:46:54.377704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:16.392 [2024-09-28 08:46:54.377870] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:16.392 [2024-09-28 08:46:54.377885] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:16.392 [2024-09-28 08:46:54.378032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.392 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.652 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.652 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.652 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.652 08:46:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.652 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.652 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.652 "name": "raid_bdev1", 00:09:16.652 "uuid": "0855b562-2787-44c9-ad8e-d73ef861f0f9", 00:09:16.652 "strip_size_kb": 64, 00:09:16.652 "state": "online", 00:09:16.652 "raid_level": "concat", 00:09:16.652 "superblock": true, 00:09:16.652 "num_base_bdevs": 3, 00:09:16.652 "num_base_bdevs_discovered": 3, 00:09:16.652 "num_base_bdevs_operational": 3, 00:09:16.652 "base_bdevs_list": [ 00:09:16.652 { 00:09:16.652 "name": "pt1", 00:09:16.652 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.652 "is_configured": true, 00:09:16.652 "data_offset": 2048, 00:09:16.652 "data_size": 63488 00:09:16.652 }, 00:09:16.652 { 00:09:16.652 "name": "pt2", 00:09:16.652 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.652 "is_configured": true, 00:09:16.652 "data_offset": 2048, 00:09:16.652 "data_size": 63488 00:09:16.652 }, 00:09:16.652 { 00:09:16.652 "name": "pt3", 00:09:16.652 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.652 "is_configured": true, 00:09:16.652 "data_offset": 2048, 00:09:16.652 "data_size": 63488 00:09:16.652 } 00:09:16.652 ] 00:09:16.652 }' 00:09:16.652 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.652 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.913 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:16.913 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:16.913 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.913 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:16.913 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.913 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.913 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.913 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:16.913 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.913 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.913 [2024-09-28 08:46:54.846683] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.913 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.913 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.913 "name": "raid_bdev1", 00:09:16.913 "aliases": [ 00:09:16.913 "0855b562-2787-44c9-ad8e-d73ef861f0f9" 00:09:16.913 ], 00:09:16.913 "product_name": "Raid Volume", 00:09:16.913 "block_size": 512, 00:09:16.913 "num_blocks": 190464, 00:09:16.913 "uuid": "0855b562-2787-44c9-ad8e-d73ef861f0f9", 00:09:16.913 "assigned_rate_limits": { 00:09:16.913 "rw_ios_per_sec": 0, 00:09:16.913 "rw_mbytes_per_sec": 0, 00:09:16.913 "r_mbytes_per_sec": 0, 00:09:16.913 "w_mbytes_per_sec": 0 00:09:16.913 }, 00:09:16.913 "claimed": false, 00:09:16.913 "zoned": false, 00:09:16.913 "supported_io_types": { 00:09:16.913 "read": true, 00:09:16.913 "write": true, 00:09:16.913 "unmap": true, 00:09:16.913 "flush": true, 00:09:16.913 "reset": true, 00:09:16.913 "nvme_admin": false, 00:09:16.913 "nvme_io": false, 00:09:16.913 "nvme_io_md": false, 00:09:16.913 "write_zeroes": true, 00:09:16.913 "zcopy": false, 00:09:16.913 "get_zone_info": false, 00:09:16.913 "zone_management": false, 00:09:16.913 "zone_append": false, 00:09:16.913 "compare": 
false, 00:09:16.913 "compare_and_write": false, 00:09:16.913 "abort": false, 00:09:16.913 "seek_hole": false, 00:09:16.913 "seek_data": false, 00:09:16.913 "copy": false, 00:09:16.913 "nvme_iov_md": false 00:09:16.913 }, 00:09:16.913 "memory_domains": [ 00:09:16.913 { 00:09:16.913 "dma_device_id": "system", 00:09:16.913 "dma_device_type": 1 00:09:16.913 }, 00:09:16.913 { 00:09:16.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.913 "dma_device_type": 2 00:09:16.913 }, 00:09:16.913 { 00:09:16.913 "dma_device_id": "system", 00:09:16.913 "dma_device_type": 1 00:09:16.913 }, 00:09:16.913 { 00:09:16.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.913 "dma_device_type": 2 00:09:16.913 }, 00:09:16.913 { 00:09:16.913 "dma_device_id": "system", 00:09:16.913 "dma_device_type": 1 00:09:16.913 }, 00:09:16.913 { 00:09:16.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.913 "dma_device_type": 2 00:09:16.913 } 00:09:16.913 ], 00:09:16.913 "driver_specific": { 00:09:16.913 "raid": { 00:09:16.913 "uuid": "0855b562-2787-44c9-ad8e-d73ef861f0f9", 00:09:16.913 "strip_size_kb": 64, 00:09:16.913 "state": "online", 00:09:16.913 "raid_level": "concat", 00:09:16.913 "superblock": true, 00:09:16.913 "num_base_bdevs": 3, 00:09:16.913 "num_base_bdevs_discovered": 3, 00:09:16.913 "num_base_bdevs_operational": 3, 00:09:16.913 "base_bdevs_list": [ 00:09:16.913 { 00:09:16.913 "name": "pt1", 00:09:16.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.913 "is_configured": true, 00:09:16.913 "data_offset": 2048, 00:09:16.913 "data_size": 63488 00:09:16.913 }, 00:09:16.913 { 00:09:16.913 "name": "pt2", 00:09:16.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.913 "is_configured": true, 00:09:16.913 "data_offset": 2048, 00:09:16.913 "data_size": 63488 00:09:16.913 }, 00:09:16.913 { 00:09:16.913 "name": "pt3", 00:09:16.913 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.913 "is_configured": true, 00:09:16.913 "data_offset": 2048, 00:09:16.913 
"data_size": 63488 00:09:16.913 } 00:09:16.913 ] 00:09:16.913 } 00:09:16.913 } 00:09:16.913 }' 00:09:16.913 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.173 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:17.173 pt2 00:09:17.173 pt3' 00:09:17.173 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.173 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.173 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.173 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:17.173 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.173 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.173 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.173 08:46:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.173 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.173 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.173 08:46:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.173 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:17.173 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.173 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:17.173 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.173 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.173 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.173 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.173 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:17.174 [2024-09-28 08:46:55.102127] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0855b562-2787-44c9-ad8e-d73ef861f0f9 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0855b562-2787-44c9-ad8e-d73ef861f0f9 ']' 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.174 [2024-09-28 08:46:55.153789] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.174 [2024-09-28 08:46:55.153822] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.174 [2024-09-28 08:46:55.153891] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.174 [2024-09-28 08:46:55.153955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.174 [2024-09-28 08:46:55.153966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:17.174 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.434 08:46:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:17.434 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.435 [2024-09-28 08:46:55.305580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:17.435 [2024-09-28 08:46:55.307732] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:09:17.435 [2024-09-28 08:46:55.307788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:17.435 [2024-09-28 08:46:55.307837] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:17.435 [2024-09-28 08:46:55.307878] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:17.435 [2024-09-28 08:46:55.307897] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:17.435 [2024-09-28 08:46:55.307912] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.435 [2024-09-28 08:46:55.307922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:17.435 request: 00:09:17.435 { 00:09:17.435 "name": "raid_bdev1", 00:09:17.435 "raid_level": "concat", 00:09:17.435 "base_bdevs": [ 00:09:17.435 "malloc1", 00:09:17.435 "malloc2", 00:09:17.435 "malloc3" 00:09:17.435 ], 00:09:17.435 "strip_size_kb": 64, 00:09:17.435 "superblock": false, 00:09:17.435 "method": "bdev_raid_create", 00:09:17.435 "req_id": 1 00:09:17.435 } 00:09:17.435 Got JSON-RPC error response 00:09:17.435 response: 00:09:17.435 { 00:09:17.435 "code": -17, 00:09:17.435 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:17.435 } 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.435 [2024-09-28 08:46:55.361446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:17.435 [2024-09-28 08:46:55.361491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.435 [2024-09-28 08:46:55.361509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:17.435 [2024-09-28 08:46:55.361517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.435 [2024-09-28 08:46:55.363909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.435 [2024-09-28 08:46:55.363942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:17.435 [2024-09-28 08:46:55.364013] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:17.435 [2024-09-28 08:46:55.364067] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:17.435 pt1 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.435 "name": "raid_bdev1", 
00:09:17.435 "uuid": "0855b562-2787-44c9-ad8e-d73ef861f0f9", 00:09:17.435 "strip_size_kb": 64, 00:09:17.435 "state": "configuring", 00:09:17.435 "raid_level": "concat", 00:09:17.435 "superblock": true, 00:09:17.435 "num_base_bdevs": 3, 00:09:17.435 "num_base_bdevs_discovered": 1, 00:09:17.435 "num_base_bdevs_operational": 3, 00:09:17.435 "base_bdevs_list": [ 00:09:17.435 { 00:09:17.435 "name": "pt1", 00:09:17.435 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.435 "is_configured": true, 00:09:17.435 "data_offset": 2048, 00:09:17.435 "data_size": 63488 00:09:17.435 }, 00:09:17.435 { 00:09:17.435 "name": null, 00:09:17.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.435 "is_configured": false, 00:09:17.435 "data_offset": 2048, 00:09:17.435 "data_size": 63488 00:09:17.435 }, 00:09:17.435 { 00:09:17.435 "name": null, 00:09:17.435 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.435 "is_configured": false, 00:09:17.435 "data_offset": 2048, 00:09:17.435 "data_size": 63488 00:09:17.435 } 00:09:17.435 ] 00:09:17.435 }' 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.435 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.005 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:18.005 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:18.005 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.005 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.005 [2024-09-28 08:46:55.804720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:18.005 [2024-09-28 08:46:55.804785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.005 [2024-09-28 08:46:55.804810] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:18.005 [2024-09-28 08:46:55.804819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.005 [2024-09-28 08:46:55.805290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.005 [2024-09-28 08:46:55.805320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:18.005 [2024-09-28 08:46:55.805412] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:18.005 [2024-09-28 08:46:55.805438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:18.005 pt2 00:09:18.005 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.005 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:18.005 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.005 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.005 [2024-09-28 08:46:55.816708] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:18.005 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.005 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:18.005 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.006 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.006 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.006 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.006 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:18.006 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.006 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.006 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.006 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.006 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.006 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.006 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.006 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.006 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.006 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.006 "name": "raid_bdev1", 00:09:18.006 "uuid": "0855b562-2787-44c9-ad8e-d73ef861f0f9", 00:09:18.006 "strip_size_kb": 64, 00:09:18.006 "state": "configuring", 00:09:18.006 "raid_level": "concat", 00:09:18.006 "superblock": true, 00:09:18.006 "num_base_bdevs": 3, 00:09:18.006 "num_base_bdevs_discovered": 1, 00:09:18.006 "num_base_bdevs_operational": 3, 00:09:18.006 "base_bdevs_list": [ 00:09:18.006 { 00:09:18.006 "name": "pt1", 00:09:18.006 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.006 "is_configured": true, 00:09:18.006 "data_offset": 2048, 00:09:18.006 "data_size": 63488 00:09:18.006 }, 00:09:18.006 { 00:09:18.006 "name": null, 00:09:18.006 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.006 "is_configured": false, 00:09:18.006 "data_offset": 0, 00:09:18.006 "data_size": 63488 00:09:18.006 }, 00:09:18.006 { 00:09:18.006 "name": null, 00:09:18.006 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.006 "is_configured": false, 00:09:18.006 "data_offset": 2048, 00:09:18.006 "data_size": 63488 00:09:18.006 } 00:09:18.006 ] 00:09:18.006 }' 00:09:18.006 08:46:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.006 08:46:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.271 [2024-09-28 08:46:56.243951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:18.271 [2024-09-28 08:46:56.244014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.271 [2024-09-28 08:46:56.244030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:18.271 [2024-09-28 08:46:56.244042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.271 [2024-09-28 08:46:56.244505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.271 [2024-09-28 08:46:56.244534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:18.271 [2024-09-28 08:46:56.244615] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:18.271 [2024-09-28 08:46:56.244669] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:18.271 pt2 00:09:18.271 08:46:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.271 [2024-09-28 08:46:56.255946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:18.271 [2024-09-28 08:46:56.255994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.271 [2024-09-28 08:46:56.256008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:18.271 [2024-09-28 08:46:56.256019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.271 [2024-09-28 08:46:56.256398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.271 [2024-09-28 08:46:56.256430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:18.271 [2024-09-28 08:46:56.256488] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:18.271 [2024-09-28 08:46:56.256509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:18.271 [2024-09-28 08:46:56.256626] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:18.271 [2024-09-28 08:46:56.256644] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:18.271 [2024-09-28 08:46:56.256930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:18.271 [2024-09-28 08:46:56.257077] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:18.271 [2024-09-28 08:46:56.257090] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:18.271 [2024-09-28 08:46:56.257221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.271 pt3 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.271 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.531 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.531 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.531 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.531 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.531 08:46:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.531 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.531 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.531 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.531 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.531 "name": "raid_bdev1", 00:09:18.531 "uuid": "0855b562-2787-44c9-ad8e-d73ef861f0f9", 00:09:18.531 "strip_size_kb": 64, 00:09:18.531 "state": "online", 00:09:18.531 "raid_level": "concat", 00:09:18.531 "superblock": true, 00:09:18.531 "num_base_bdevs": 3, 00:09:18.531 "num_base_bdevs_discovered": 3, 00:09:18.531 "num_base_bdevs_operational": 3, 00:09:18.531 "base_bdevs_list": [ 00:09:18.531 { 00:09:18.531 "name": "pt1", 00:09:18.531 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.531 "is_configured": true, 00:09:18.531 "data_offset": 2048, 00:09:18.531 "data_size": 63488 00:09:18.531 }, 00:09:18.531 { 00:09:18.531 "name": "pt2", 00:09:18.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.531 "is_configured": true, 00:09:18.531 "data_offset": 2048, 00:09:18.531 "data_size": 63488 00:09:18.531 }, 00:09:18.531 { 00:09:18.531 "name": "pt3", 00:09:18.531 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.531 "is_configured": true, 00:09:18.531 "data_offset": 2048, 00:09:18.531 "data_size": 63488 00:09:18.531 } 00:09:18.531 ] 00:09:18.531 }' 00:09:18.531 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.531 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.791 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:18.791 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:09:18.791 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:18.791 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.791 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.791 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.791 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.791 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.791 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.791 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.791 [2024-09-28 08:46:56.699521] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.791 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.791 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.791 "name": "raid_bdev1", 00:09:18.791 "aliases": [ 00:09:18.791 "0855b562-2787-44c9-ad8e-d73ef861f0f9" 00:09:18.791 ], 00:09:18.791 "product_name": "Raid Volume", 00:09:18.791 "block_size": 512, 00:09:18.791 "num_blocks": 190464, 00:09:18.791 "uuid": "0855b562-2787-44c9-ad8e-d73ef861f0f9", 00:09:18.791 "assigned_rate_limits": { 00:09:18.791 "rw_ios_per_sec": 0, 00:09:18.791 "rw_mbytes_per_sec": 0, 00:09:18.791 "r_mbytes_per_sec": 0, 00:09:18.791 "w_mbytes_per_sec": 0 00:09:18.791 }, 00:09:18.791 "claimed": false, 00:09:18.791 "zoned": false, 00:09:18.791 "supported_io_types": { 00:09:18.791 "read": true, 00:09:18.791 "write": true, 00:09:18.791 "unmap": true, 00:09:18.791 "flush": true, 00:09:18.791 "reset": true, 00:09:18.791 "nvme_admin": false, 00:09:18.791 "nvme_io": false, 00:09:18.791 
"nvme_io_md": false, 00:09:18.791 "write_zeroes": true, 00:09:18.791 "zcopy": false, 00:09:18.791 "get_zone_info": false, 00:09:18.791 "zone_management": false, 00:09:18.791 "zone_append": false, 00:09:18.791 "compare": false, 00:09:18.791 "compare_and_write": false, 00:09:18.791 "abort": false, 00:09:18.791 "seek_hole": false, 00:09:18.791 "seek_data": false, 00:09:18.791 "copy": false, 00:09:18.791 "nvme_iov_md": false 00:09:18.791 }, 00:09:18.791 "memory_domains": [ 00:09:18.791 { 00:09:18.791 "dma_device_id": "system", 00:09:18.791 "dma_device_type": 1 00:09:18.791 }, 00:09:18.791 { 00:09:18.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.791 "dma_device_type": 2 00:09:18.791 }, 00:09:18.791 { 00:09:18.791 "dma_device_id": "system", 00:09:18.791 "dma_device_type": 1 00:09:18.791 }, 00:09:18.791 { 00:09:18.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.791 "dma_device_type": 2 00:09:18.791 }, 00:09:18.791 { 00:09:18.791 "dma_device_id": "system", 00:09:18.791 "dma_device_type": 1 00:09:18.791 }, 00:09:18.791 { 00:09:18.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.791 "dma_device_type": 2 00:09:18.791 } 00:09:18.791 ], 00:09:18.791 "driver_specific": { 00:09:18.791 "raid": { 00:09:18.791 "uuid": "0855b562-2787-44c9-ad8e-d73ef861f0f9", 00:09:18.791 "strip_size_kb": 64, 00:09:18.791 "state": "online", 00:09:18.791 "raid_level": "concat", 00:09:18.791 "superblock": true, 00:09:18.791 "num_base_bdevs": 3, 00:09:18.791 "num_base_bdevs_discovered": 3, 00:09:18.791 "num_base_bdevs_operational": 3, 00:09:18.791 "base_bdevs_list": [ 00:09:18.791 { 00:09:18.791 "name": "pt1", 00:09:18.791 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.791 "is_configured": true, 00:09:18.791 "data_offset": 2048, 00:09:18.791 "data_size": 63488 00:09:18.791 }, 00:09:18.791 { 00:09:18.791 "name": "pt2", 00:09:18.791 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.791 "is_configured": true, 00:09:18.791 "data_offset": 2048, 00:09:18.791 "data_size": 
63488 00:09:18.791 }, 00:09:18.791 { 00:09:18.791 "name": "pt3", 00:09:18.791 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.791 "is_configured": true, 00:09:18.791 "data_offset": 2048, 00:09:18.791 "data_size": 63488 00:09:18.791 } 00:09:18.791 ] 00:09:18.791 } 00:09:18.791 } 00:09:18.791 }' 00:09:18.791 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.791 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:18.791 pt2 00:09:18.791 pt3' 00:09:18.791 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.051 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:19.051 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] 
| .uuid' 00:09:19.052 [2024-09-28 08:46:56.958998] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0855b562-2787-44c9-ad8e-d73ef861f0f9 '!=' 0855b562-2787-44c9-ad8e-d73ef861f0f9 ']' 00:09:19.052 08:46:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:19.052 08:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:19.052 08:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:19.052 08:46:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66864 00:09:19.052 08:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 66864 ']' 00:09:19.052 08:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 66864 00:09:19.052 08:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:19.052 08:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:19.052 08:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66864 00:09:19.052 08:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:19.052 08:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:19.052 killing process with pid 66864 00:09:19.052 08:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66864' 00:09:19.052 08:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 66864 00:09:19.052 [2024-09-28 08:46:57.034068] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.052 [2024-09-28 08:46:57.034168] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.052 [2024-09-28 08:46:57.034235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.052 [2024-09-28 08:46:57.034250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:19.052 08:46:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 66864 00:09:19.621 [2024-09-28 08:46:57.345004] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.002 08:46:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:21.002 00:09:21.002 real 0m5.424s 00:09:21.002 user 0m7.577s 00:09:21.002 sys 0m0.972s 00:09:21.002 08:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.002 08:46:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.002 ************************************ 00:09:21.002 END TEST raid_superblock_test 00:09:21.002 ************************************ 00:09:21.002 08:46:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:21.003 08:46:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:21.003 08:46:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.003 08:46:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.003 ************************************ 00:09:21.003 START TEST raid_read_error_test 00:09:21.003 ************************************ 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:21.003 08:46:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fI1JlynzgX 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67117 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67117 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 67117 ']' 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.003 08:46:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.003 [2024-09-28 08:46:58.855480] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:21.003 [2024-09-28 08:46:58.855628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67117 ] 00:09:21.262 [2024-09-28 08:46:59.025455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.521 [2024-09-28 08:46:59.269287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.521 [2024-09-28 08:46:59.496639] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.521 [2024-09-28 08:46:59.496701] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.780 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.780 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:21.780 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.780 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:21.780 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.780 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.780 BaseBdev1_malloc 00:09:21.780 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.780 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:21.780 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.780 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.781 true 00:09:21.781 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:21.781 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:21.781 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.781 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.781 [2024-09-28 08:46:59.742462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:21.781 [2024-09-28 08:46:59.742520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.781 [2024-09-28 08:46:59.742539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:21.781 [2024-09-28 08:46:59.742550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.781 [2024-09-28 08:46:59.744985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.781 [2024-09-28 08:46:59.745024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:21.781 BaseBdev1 00:09:21.781 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.781 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:21.781 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:21.781 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.781 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.043 BaseBdev2_malloc 00:09:22.043 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.044 true 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.044 [2024-09-28 08:46:59.844244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:22.044 [2024-09-28 08:46:59.844298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.044 [2024-09-28 08:46:59.844317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:22.044 [2024-09-28 08:46:59.844339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.044 [2024-09-28 08:46:59.846596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.044 [2024-09-28 08:46:59.846634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:22.044 BaseBdev2 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.044 BaseBdev3_malloc 00:09:22.044 08:46:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.044 true 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.044 [2024-09-28 08:46:59.915705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:22.044 [2024-09-28 08:46:59.915756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.044 [2024-09-28 08:46:59.915776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:22.044 [2024-09-28 08:46:59.915788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.044 [2024-09-28 08:46:59.918111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.044 [2024-09-28 08:46:59.918150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:22.044 BaseBdev3 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.044 [2024-09-28 08:46:59.927777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.044 [2024-09-28 08:46:59.929828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.044 [2024-09-28 08:46:59.929910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.044 [2024-09-28 08:46:59.930104] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:22.044 [2024-09-28 08:46:59.930123] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:22.044 [2024-09-28 08:46:59.930367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:22.044 [2024-09-28 08:46:59.930534] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:22.044 [2024-09-28 08:46:59.930550] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:22.044 [2024-09-28 08:46:59.930702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.044 08:46:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.044 "name": "raid_bdev1", 00:09:22.044 "uuid": "9e04959a-02ac-4828-9fc8-0f97afb0559f", 00:09:22.044 "strip_size_kb": 64, 00:09:22.044 "state": "online", 00:09:22.044 "raid_level": "concat", 00:09:22.044 "superblock": true, 00:09:22.044 "num_base_bdevs": 3, 00:09:22.044 "num_base_bdevs_discovered": 3, 00:09:22.044 "num_base_bdevs_operational": 3, 00:09:22.044 "base_bdevs_list": [ 00:09:22.044 { 00:09:22.044 "name": "BaseBdev1", 00:09:22.044 "uuid": "3a65b5f4-ca1f-5953-bdbd-27661dee115b", 00:09:22.044 "is_configured": true, 00:09:22.044 "data_offset": 2048, 00:09:22.044 "data_size": 63488 00:09:22.044 }, 00:09:22.044 { 00:09:22.044 "name": "BaseBdev2", 00:09:22.044 "uuid": "4e5dab4b-8d9e-5e55-b401-5779fc7a9462", 00:09:22.044 "is_configured": true, 00:09:22.044 "data_offset": 2048, 00:09:22.044 "data_size": 63488 
00:09:22.044 }, 00:09:22.044 { 00:09:22.044 "name": "BaseBdev3", 00:09:22.044 "uuid": "6aa0f496-5590-5f7d-a92b-cd972c66d3f0", 00:09:22.044 "is_configured": true, 00:09:22.044 "data_offset": 2048, 00:09:22.044 "data_size": 63488 00:09:22.044 } 00:09:22.044 ] 00:09:22.044 }' 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.044 08:46:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.628 08:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:22.628 08:47:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:22.628 [2024-09-28 08:47:00.440204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.565 "name": "raid_bdev1", 00:09:23.565 "uuid": "9e04959a-02ac-4828-9fc8-0f97afb0559f", 00:09:23.565 "strip_size_kb": 64, 00:09:23.565 "state": "online", 00:09:23.565 "raid_level": "concat", 00:09:23.565 "superblock": true, 00:09:23.565 "num_base_bdevs": 3, 00:09:23.565 "num_base_bdevs_discovered": 3, 00:09:23.565 "num_base_bdevs_operational": 3, 00:09:23.565 "base_bdevs_list": [ 00:09:23.565 { 00:09:23.565 "name": "BaseBdev1", 00:09:23.565 "uuid": "3a65b5f4-ca1f-5953-bdbd-27661dee115b", 00:09:23.565 "is_configured": true, 00:09:23.565 "data_offset": 2048, 00:09:23.565 "data_size": 63488 
00:09:23.565 }, 00:09:23.565 { 00:09:23.565 "name": "BaseBdev2", 00:09:23.565 "uuid": "4e5dab4b-8d9e-5e55-b401-5779fc7a9462", 00:09:23.565 "is_configured": true, 00:09:23.565 "data_offset": 2048, 00:09:23.565 "data_size": 63488 00:09:23.565 }, 00:09:23.565 { 00:09:23.565 "name": "BaseBdev3", 00:09:23.565 "uuid": "6aa0f496-5590-5f7d-a92b-cd972c66d3f0", 00:09:23.565 "is_configured": true, 00:09:23.565 "data_offset": 2048, 00:09:23.565 "data_size": 63488 00:09:23.565 } 00:09:23.565 ] 00:09:23.565 }' 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.565 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.824 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:23.824 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.824 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.824 [2024-09-28 08:47:01.756032] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:23.824 [2024-09-28 08:47:01.756069] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.824 [2024-09-28 08:47:01.758680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.824 [2024-09-28 08:47:01.758731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.824 [2024-09-28 08:47:01.758773] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.824 [2024-09-28 08:47:01.758782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:23.824 { 00:09:23.824 "results": [ 00:09:23.824 { 00:09:23.824 "job": "raid_bdev1", 00:09:23.824 "core_mask": "0x1", 00:09:23.824 "workload": "randrw", 00:09:23.824 "percentage": 50, 
00:09:23.824 "status": "finished", 00:09:23.824 "queue_depth": 1, 00:09:23.824 "io_size": 131072, 00:09:23.824 "runtime": 1.316383, 00:09:23.824 "iops": 14593.777039053224, 00:09:23.824 "mibps": 1824.222129881653, 00:09:23.824 "io_failed": 1, 00:09:23.824 "io_timeout": 0, 00:09:23.824 "avg_latency_us": 96.42851820232441, 00:09:23.824 "min_latency_us": 25.3764192139738, 00:09:23.824 "max_latency_us": 1380.8349344978167 00:09:23.824 } 00:09:23.824 ], 00:09:23.824 "core_count": 1 00:09:23.824 } 00:09:23.824 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.824 08:47:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67117 00:09:23.824 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 67117 ']' 00:09:23.824 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 67117 00:09:23.824 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:23.824 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.824 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67117 00:09:23.824 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.824 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.824 killing process with pid 67117 00:09:23.824 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67117' 00:09:23.824 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 67117 00:09:23.824 [2024-09-28 08:47:01.803342] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:23.824 08:47:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 67117 00:09:24.083 [2024-09-28 
08:47:02.041415] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:25.460 08:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fI1JlynzgX 00:09:25.460 08:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:25.460 08:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:25.460 08:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:09:25.460 08:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:25.460 08:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:25.460 08:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:25.460 08:47:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:09:25.460 00:09:25.460 real 0m4.692s 00:09:25.460 user 0m5.298s 00:09:25.460 sys 0m0.715s 00:09:25.460 08:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.460 08:47:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.460 ************************************ 00:09:25.460 END TEST raid_read_error_test 00:09:25.460 ************************************ 00:09:25.720 08:47:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:25.720 08:47:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:25.720 08:47:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.720 08:47:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:25.720 ************************************ 00:09:25.720 START TEST raid_write_error_test 00:09:25.720 ************************************ 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:09:25.720 08:47:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:25.720 08:47:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.f9eqqtjsHi 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67268 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67268 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 67268 ']' 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.720 08:47:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.720 [2024-09-28 08:47:03.620290] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:25.720 [2024-09-28 08:47:03.620435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67268 ] 00:09:25.980 [2024-09-28 08:47:03.788227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.239 [2024-09-28 08:47:04.025846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.499 [2024-09-28 08:47:04.252537] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.499 [2024-09-28 08:47:04.252572] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.499 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.499 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:26.499 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.499 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:26.499 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.499 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.499 BaseBdev1_malloc 00:09:26.499 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.499 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:26.499 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.499 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.759 true 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.759 [2024-09-28 08:47:04.509377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:26.759 [2024-09-28 08:47:04.509475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.759 [2024-09-28 08:47:04.509496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:26.759 [2024-09-28 08:47:04.509508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.759 [2024-09-28 08:47:04.511955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.759 [2024-09-28 08:47:04.511994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:26.759 BaseBdev1 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:26.759 BaseBdev2_malloc 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.759 true 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.759 [2024-09-28 08:47:04.593153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:26.759 [2024-09-28 08:47:04.593207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.759 [2024-09-28 08:47:04.593222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:26.759 [2024-09-28 08:47:04.593234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.759 [2024-09-28 08:47:04.595558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.759 [2024-09-28 08:47:04.595596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:26.759 BaseBdev2 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:26.759 08:47:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.759 BaseBdev3_malloc 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.759 true 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.759 [2024-09-28 08:47:04.661539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:26.759 [2024-09-28 08:47:04.661589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.759 [2024-09-28 08:47:04.661605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:26.759 [2024-09-28 08:47:04.661615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.759 [2024-09-28 08:47:04.663974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.759 [2024-09-28 08:47:04.664011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:26.759 BaseBdev3 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.759 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.759 [2024-09-28 08:47:04.673599] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.759 [2024-09-28 08:47:04.675633] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.759 [2024-09-28 08:47:04.675802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.759 [2024-09-28 08:47:04.676010] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:26.759 [2024-09-28 08:47:04.676023] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:26.759 [2024-09-28 08:47:04.676270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:26.759 [2024-09-28 08:47:04.676422] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:26.759 [2024-09-28 08:47:04.676433] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:26.760 [2024-09-28 08:47:04.676575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.760 "name": "raid_bdev1", 00:09:26.760 "uuid": "fcc86d9f-cad3-43bf-a7fc-abcb15105e86", 00:09:26.760 "strip_size_kb": 64, 00:09:26.760 "state": "online", 00:09:26.760 "raid_level": "concat", 00:09:26.760 "superblock": true, 00:09:26.760 "num_base_bdevs": 3, 00:09:26.760 "num_base_bdevs_discovered": 3, 00:09:26.760 "num_base_bdevs_operational": 3, 00:09:26.760 "base_bdevs_list": [ 00:09:26.760 { 00:09:26.760 
"name": "BaseBdev1", 00:09:26.760 "uuid": "db489ab9-dbd4-5b51-8008-80efbd61f7a6", 00:09:26.760 "is_configured": true, 00:09:26.760 "data_offset": 2048, 00:09:26.760 "data_size": 63488 00:09:26.760 }, 00:09:26.760 { 00:09:26.760 "name": "BaseBdev2", 00:09:26.760 "uuid": "49a9b304-c218-510c-9d05-c878b4023a54", 00:09:26.760 "is_configured": true, 00:09:26.760 "data_offset": 2048, 00:09:26.760 "data_size": 63488 00:09:26.760 }, 00:09:26.760 { 00:09:26.760 "name": "BaseBdev3", 00:09:26.760 "uuid": "9866d508-d504-58c9-b752-085f9417d413", 00:09:26.760 "is_configured": true, 00:09:26.760 "data_offset": 2048, 00:09:26.760 "data_size": 63488 00:09:26.760 } 00:09:26.760 ] 00:09:26.760 }' 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.760 08:47:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.329 08:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:27.329 08:47:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:27.329 [2024-09-28 08:47:05.186259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.269 "name": "raid_bdev1", 00:09:28.269 "uuid": "fcc86d9f-cad3-43bf-a7fc-abcb15105e86", 00:09:28.269 "strip_size_kb": 64, 00:09:28.269 "state": "online", 
00:09:28.269 "raid_level": "concat", 00:09:28.269 "superblock": true, 00:09:28.269 "num_base_bdevs": 3, 00:09:28.269 "num_base_bdevs_discovered": 3, 00:09:28.269 "num_base_bdevs_operational": 3, 00:09:28.269 "base_bdevs_list": [ 00:09:28.269 { 00:09:28.269 "name": "BaseBdev1", 00:09:28.269 "uuid": "db489ab9-dbd4-5b51-8008-80efbd61f7a6", 00:09:28.269 "is_configured": true, 00:09:28.269 "data_offset": 2048, 00:09:28.269 "data_size": 63488 00:09:28.269 }, 00:09:28.269 { 00:09:28.269 "name": "BaseBdev2", 00:09:28.269 "uuid": "49a9b304-c218-510c-9d05-c878b4023a54", 00:09:28.269 "is_configured": true, 00:09:28.269 "data_offset": 2048, 00:09:28.269 "data_size": 63488 00:09:28.269 }, 00:09:28.269 { 00:09:28.269 "name": "BaseBdev3", 00:09:28.269 "uuid": "9866d508-d504-58c9-b752-085f9417d413", 00:09:28.269 "is_configured": true, 00:09:28.269 "data_offset": 2048, 00:09:28.269 "data_size": 63488 00:09:28.269 } 00:09:28.269 ] 00:09:28.269 }' 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.269 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.838 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:28.838 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.838 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.838 [2024-09-28 08:47:06.554799] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:28.838 [2024-09-28 08:47:06.554833] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:28.838 [2024-09-28 08:47:06.557401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:28.838 [2024-09-28 08:47:06.557447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.838 [2024-09-28 08:47:06.557486] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:28.838 [2024-09-28 08:47:06.557494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:28.838 { 00:09:28.838 "results": [ 00:09:28.838 { 00:09:28.838 "job": "raid_bdev1", 00:09:28.838 "core_mask": "0x1", 00:09:28.838 "workload": "randrw", 00:09:28.838 "percentage": 50, 00:09:28.838 "status": "finished", 00:09:28.838 "queue_depth": 1, 00:09:28.838 "io_size": 131072, 00:09:28.838 "runtime": 1.369059, 00:09:28.838 "iops": 14507.775048409163, 00:09:28.838 "mibps": 1813.4718810511454, 00:09:28.838 "io_failed": 1, 00:09:28.838 "io_timeout": 0, 00:09:28.838 "avg_latency_us": 97.06892141298901, 00:09:28.838 "min_latency_us": 24.817467248908297, 00:09:28.838 "max_latency_us": 1395.1441048034935 00:09:28.838 } 00:09:28.838 ], 00:09:28.838 "core_count": 1 00:09:28.838 } 00:09:28.838 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.839 08:47:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67268 00:09:28.839 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 67268 ']' 00:09:28.839 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 67268 00:09:28.839 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:28.839 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:28.839 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67268 00:09:28.839 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:28.839 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:28.839 08:47:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 67268' 00:09:28.839 killing process with pid 67268 00:09:28.839 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 67268 00:09:28.839 [2024-09-28 08:47:06.604511] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:28.839 08:47:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 67268 00:09:29.098 [2024-09-28 08:47:06.848803] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.481 08:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.f9eqqtjsHi 00:09:30.481 08:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:30.481 08:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:30.481 08:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:30.481 08:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:30.481 ************************************ 00:09:30.481 END TEST raid_write_error_test 00:09:30.481 ************************************ 00:09:30.481 08:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:30.481 08:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:30.481 08:47:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:30.481 00:09:30.481 real 0m4.727s 00:09:30.481 user 0m5.377s 00:09:30.481 sys 0m0.722s 00:09:30.481 08:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.481 08:47:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.481 08:47:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:30.481 08:47:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:30.481 08:47:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:30.481 08:47:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.481 08:47:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.481 ************************************ 00:09:30.481 START TEST raid_state_function_test 00:09:30.481 ************************************ 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67406 00:09:30.481 Process raid pid: 67406 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67406' 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67406 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 67406 ']' 00:09:30.481 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.481 08:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.482 08:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.482 08:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.482 08:47:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.482 [2024-09-28 08:47:08.417122] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:30.482 [2024-09-28 08:47:08.417336] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.742 [2024-09-28 08:47:08.586364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.002 [2024-09-28 08:47:08.832556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.262 [2024-09-28 08:47:09.063290] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.262 [2024-09-28 08:47:09.063412] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.262 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.263 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:31.263 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.263 08:47:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.263 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.263 [2024-09-28 08:47:09.244546] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.263 [2024-09-28 08:47:09.244641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.263 [2024-09-28 08:47:09.244679] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.263 [2024-09-28 08:47:09.244703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.263 [2024-09-28 08:47:09.244723] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.263 [2024-09-28 08:47:09.244747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.263 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.263 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:31.263 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.263 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.263 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.263 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.263 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.263 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.263 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.263 
08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.263 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.523 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.523 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.523 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.523 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.523 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.523 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.523 "name": "Existed_Raid", 00:09:31.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.523 "strip_size_kb": 0, 00:09:31.523 "state": "configuring", 00:09:31.523 "raid_level": "raid1", 00:09:31.523 "superblock": false, 00:09:31.523 "num_base_bdevs": 3, 00:09:31.523 "num_base_bdevs_discovered": 0, 00:09:31.523 "num_base_bdevs_operational": 3, 00:09:31.523 "base_bdevs_list": [ 00:09:31.523 { 00:09:31.523 "name": "BaseBdev1", 00:09:31.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.523 "is_configured": false, 00:09:31.523 "data_offset": 0, 00:09:31.523 "data_size": 0 00:09:31.523 }, 00:09:31.523 { 00:09:31.523 "name": "BaseBdev2", 00:09:31.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.523 "is_configured": false, 00:09:31.523 "data_offset": 0, 00:09:31.523 "data_size": 0 00:09:31.523 }, 00:09:31.523 { 00:09:31.523 "name": "BaseBdev3", 00:09:31.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.523 "is_configured": false, 00:09:31.523 "data_offset": 0, 00:09:31.523 "data_size": 0 00:09:31.523 } 00:09:31.523 ] 00:09:31.523 }' 00:09:31.523 08:47:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.523 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.783 [2024-09-28 08:47:09.691733] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.783 [2024-09-28 08:47:09.691825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.783 [2024-09-28 08:47:09.699730] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.783 [2024-09-28 08:47:09.699821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.783 [2024-09-28 08:47:09.699847] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.783 [2024-09-28 08:47:09.699870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.783 [2024-09-28 08:47:09.699887] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.783 [2024-09-28 08:47:09.699908] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.783 [2024-09-28 08:47:09.760590] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.783 BaseBdev1 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.783 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.044 [ 00:09:32.044 { 00:09:32.044 "name": "BaseBdev1", 00:09:32.044 "aliases": [ 00:09:32.044 "4d28bed4-84e6-4b8a-bd16-143e54af2097" 00:09:32.044 ], 00:09:32.044 "product_name": "Malloc disk", 00:09:32.044 "block_size": 512, 00:09:32.044 "num_blocks": 65536, 00:09:32.044 "uuid": "4d28bed4-84e6-4b8a-bd16-143e54af2097", 00:09:32.044 "assigned_rate_limits": { 00:09:32.044 "rw_ios_per_sec": 0, 00:09:32.044 "rw_mbytes_per_sec": 0, 00:09:32.044 "r_mbytes_per_sec": 0, 00:09:32.044 "w_mbytes_per_sec": 0 00:09:32.044 }, 00:09:32.044 "claimed": true, 00:09:32.044 "claim_type": "exclusive_write", 00:09:32.044 "zoned": false, 00:09:32.044 "supported_io_types": { 00:09:32.044 "read": true, 00:09:32.044 "write": true, 00:09:32.044 "unmap": true, 00:09:32.044 "flush": true, 00:09:32.044 "reset": true, 00:09:32.044 "nvme_admin": false, 00:09:32.044 "nvme_io": false, 00:09:32.044 "nvme_io_md": false, 00:09:32.044 "write_zeroes": true, 00:09:32.044 "zcopy": true, 00:09:32.044 "get_zone_info": false, 00:09:32.044 "zone_management": false, 00:09:32.044 "zone_append": false, 00:09:32.044 "compare": false, 00:09:32.044 "compare_and_write": false, 00:09:32.044 "abort": true, 00:09:32.044 "seek_hole": false, 00:09:32.044 "seek_data": false, 00:09:32.044 "copy": true, 00:09:32.044 "nvme_iov_md": false 00:09:32.044 }, 00:09:32.044 "memory_domains": [ 00:09:32.044 { 00:09:32.044 "dma_device_id": "system", 00:09:32.044 "dma_device_type": 1 00:09:32.044 }, 00:09:32.044 { 00:09:32.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.044 "dma_device_type": 2 00:09:32.044 } 00:09:32.044 ], 00:09:32.044 "driver_specific": {} 00:09:32.044 } 00:09:32.044 ] 00:09:32.044 08:47:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:32.044 "name": "Existed_Raid", 00:09:32.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.044 "strip_size_kb": 0, 00:09:32.044 "state": "configuring", 00:09:32.044 "raid_level": "raid1", 00:09:32.044 "superblock": false, 00:09:32.044 "num_base_bdevs": 3, 00:09:32.044 "num_base_bdevs_discovered": 1, 00:09:32.044 "num_base_bdevs_operational": 3, 00:09:32.044 "base_bdevs_list": [ 00:09:32.044 { 00:09:32.044 "name": "BaseBdev1", 00:09:32.044 "uuid": "4d28bed4-84e6-4b8a-bd16-143e54af2097", 00:09:32.044 "is_configured": true, 00:09:32.044 "data_offset": 0, 00:09:32.044 "data_size": 65536 00:09:32.044 }, 00:09:32.044 { 00:09:32.044 "name": "BaseBdev2", 00:09:32.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.044 "is_configured": false, 00:09:32.044 "data_offset": 0, 00:09:32.044 "data_size": 0 00:09:32.044 }, 00:09:32.044 { 00:09:32.044 "name": "BaseBdev3", 00:09:32.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.044 "is_configured": false, 00:09:32.044 "data_offset": 0, 00:09:32.044 "data_size": 0 00:09:32.044 } 00:09:32.044 ] 00:09:32.044 }' 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.044 08:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.304 [2024-09-28 08:47:10.271765] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.304 [2024-09-28 08:47:10.271857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.304 [2024-09-28 08:47:10.283782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.304 [2024-09-28 08:47:10.285869] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.304 [2024-09-28 08:47:10.285912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.304 [2024-09-28 08:47:10.285921] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:32.304 [2024-09-28 08:47:10.285930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.304 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.563 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.563 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.563 "name": "Existed_Raid", 00:09:32.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.563 "strip_size_kb": 0, 00:09:32.563 "state": "configuring", 00:09:32.563 "raid_level": "raid1", 00:09:32.563 "superblock": false, 00:09:32.563 "num_base_bdevs": 3, 00:09:32.563 "num_base_bdevs_discovered": 1, 00:09:32.563 "num_base_bdevs_operational": 3, 00:09:32.563 "base_bdevs_list": [ 00:09:32.563 { 00:09:32.563 "name": "BaseBdev1", 00:09:32.563 "uuid": "4d28bed4-84e6-4b8a-bd16-143e54af2097", 00:09:32.563 "is_configured": true, 00:09:32.563 "data_offset": 0, 00:09:32.563 "data_size": 65536 00:09:32.563 }, 00:09:32.563 { 00:09:32.563 "name": "BaseBdev2", 00:09:32.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.563 
"is_configured": false, 00:09:32.563 "data_offset": 0, 00:09:32.563 "data_size": 0 00:09:32.563 }, 00:09:32.563 { 00:09:32.563 "name": "BaseBdev3", 00:09:32.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.563 "is_configured": false, 00:09:32.563 "data_offset": 0, 00:09:32.563 "data_size": 0 00:09:32.563 } 00:09:32.563 ] 00:09:32.563 }' 00:09:32.563 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.563 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.824 [2024-09-28 08:47:10.713826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.824 BaseBdev2 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.824 08:47:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.824 [ 00:09:32.824 { 00:09:32.824 "name": "BaseBdev2", 00:09:32.824 "aliases": [ 00:09:32.824 "9e2779a1-8c62-41cb-8acb-6c55cf15456f" 00:09:32.824 ], 00:09:32.824 "product_name": "Malloc disk", 00:09:32.824 "block_size": 512, 00:09:32.824 "num_blocks": 65536, 00:09:32.824 "uuid": "9e2779a1-8c62-41cb-8acb-6c55cf15456f", 00:09:32.824 "assigned_rate_limits": { 00:09:32.824 "rw_ios_per_sec": 0, 00:09:32.824 "rw_mbytes_per_sec": 0, 00:09:32.824 "r_mbytes_per_sec": 0, 00:09:32.824 "w_mbytes_per_sec": 0 00:09:32.824 }, 00:09:32.824 "claimed": true, 00:09:32.824 "claim_type": "exclusive_write", 00:09:32.824 "zoned": false, 00:09:32.824 "supported_io_types": { 00:09:32.824 "read": true, 00:09:32.824 "write": true, 00:09:32.824 "unmap": true, 00:09:32.824 "flush": true, 00:09:32.824 "reset": true, 00:09:32.824 "nvme_admin": false, 00:09:32.824 "nvme_io": false, 00:09:32.824 "nvme_io_md": false, 00:09:32.824 "write_zeroes": true, 00:09:32.824 "zcopy": true, 00:09:32.824 "get_zone_info": false, 00:09:32.824 "zone_management": false, 00:09:32.824 "zone_append": false, 00:09:32.824 "compare": false, 00:09:32.824 "compare_and_write": false, 00:09:32.824 "abort": true, 00:09:32.824 "seek_hole": false, 00:09:32.824 "seek_data": false, 00:09:32.824 "copy": true, 00:09:32.824 "nvme_iov_md": false 00:09:32.824 }, 00:09:32.824 
"memory_domains": [ 00:09:32.824 { 00:09:32.824 "dma_device_id": "system", 00:09:32.824 "dma_device_type": 1 00:09:32.824 }, 00:09:32.824 { 00:09:32.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.824 "dma_device_type": 2 00:09:32.824 } 00:09:32.824 ], 00:09:32.824 "driver_specific": {} 00:09:32.824 } 00:09:32.824 ] 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.824 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.824 "name": "Existed_Raid", 00:09:32.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.824 "strip_size_kb": 0, 00:09:32.824 "state": "configuring", 00:09:32.824 "raid_level": "raid1", 00:09:32.824 "superblock": false, 00:09:32.824 "num_base_bdevs": 3, 00:09:32.824 "num_base_bdevs_discovered": 2, 00:09:32.825 "num_base_bdevs_operational": 3, 00:09:32.825 "base_bdevs_list": [ 00:09:32.825 { 00:09:32.825 "name": "BaseBdev1", 00:09:32.825 "uuid": "4d28bed4-84e6-4b8a-bd16-143e54af2097", 00:09:32.825 "is_configured": true, 00:09:32.825 "data_offset": 0, 00:09:32.825 "data_size": 65536 00:09:32.825 }, 00:09:32.825 { 00:09:32.825 "name": "BaseBdev2", 00:09:32.825 "uuid": "9e2779a1-8c62-41cb-8acb-6c55cf15456f", 00:09:32.825 "is_configured": true, 00:09:32.825 "data_offset": 0, 00:09:32.825 "data_size": 65536 00:09:32.825 }, 00:09:32.825 { 00:09:32.825 "name": "BaseBdev3", 00:09:32.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.825 "is_configured": false, 00:09:32.825 "data_offset": 0, 00:09:32.825 "data_size": 0 00:09:32.825 } 00:09:32.825 ] 00:09:32.825 }' 00:09:32.825 08:47:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.825 08:47:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.394 [2024-09-28 08:47:11.252436] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.394 [2024-09-28 08:47:11.252495] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:33.394 [2024-09-28 08:47:11.252515] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:33.394 [2024-09-28 08:47:11.252811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:33.394 [2024-09-28 08:47:11.253024] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:33.394 [2024-09-28 08:47:11.253034] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:33.394 [2024-09-28 08:47:11.253341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.394 BaseBdev3 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.394 [ 00:09:33.394 { 00:09:33.394 "name": "BaseBdev3", 00:09:33.394 "aliases": [ 00:09:33.394 "6849995e-974a-485f-a2ce-54b4128beee9" 00:09:33.394 ], 00:09:33.394 "product_name": "Malloc disk", 00:09:33.394 "block_size": 512, 00:09:33.394 "num_blocks": 65536, 00:09:33.394 "uuid": "6849995e-974a-485f-a2ce-54b4128beee9", 00:09:33.394 "assigned_rate_limits": { 00:09:33.394 "rw_ios_per_sec": 0, 00:09:33.394 "rw_mbytes_per_sec": 0, 00:09:33.394 "r_mbytes_per_sec": 0, 00:09:33.394 "w_mbytes_per_sec": 0 00:09:33.394 }, 00:09:33.394 "claimed": true, 00:09:33.394 "claim_type": "exclusive_write", 00:09:33.394 "zoned": false, 00:09:33.394 "supported_io_types": { 00:09:33.394 "read": true, 00:09:33.394 "write": true, 00:09:33.394 "unmap": true, 00:09:33.394 "flush": true, 00:09:33.394 "reset": true, 00:09:33.394 "nvme_admin": false, 00:09:33.394 "nvme_io": false, 00:09:33.394 "nvme_io_md": false, 00:09:33.394 "write_zeroes": true, 00:09:33.394 "zcopy": true, 00:09:33.394 "get_zone_info": false, 00:09:33.394 "zone_management": false, 00:09:33.394 "zone_append": false, 00:09:33.394 "compare": false, 00:09:33.394 "compare_and_write": false, 00:09:33.394 "abort": true, 00:09:33.394 "seek_hole": false, 00:09:33.394 "seek_data": false, 00:09:33.394 
"copy": true, 00:09:33.394 "nvme_iov_md": false 00:09:33.394 }, 00:09:33.394 "memory_domains": [ 00:09:33.394 { 00:09:33.394 "dma_device_id": "system", 00:09:33.394 "dma_device_type": 1 00:09:33.394 }, 00:09:33.394 { 00:09:33.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.394 "dma_device_type": 2 00:09:33.394 } 00:09:33.394 ], 00:09:33.394 "driver_specific": {} 00:09:33.394 } 00:09:33.394 ] 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.394 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.395 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.395 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.395 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.395 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.395 08:47:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.395 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.395 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.395 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.395 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.395 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.395 "name": "Existed_Raid", 00:09:33.395 "uuid": "25c44431-3560-4994-889a-b2db6241872a", 00:09:33.395 "strip_size_kb": 0, 00:09:33.395 "state": "online", 00:09:33.395 "raid_level": "raid1", 00:09:33.395 "superblock": false, 00:09:33.395 "num_base_bdevs": 3, 00:09:33.395 "num_base_bdevs_discovered": 3, 00:09:33.395 "num_base_bdevs_operational": 3, 00:09:33.395 "base_bdevs_list": [ 00:09:33.395 { 00:09:33.395 "name": "BaseBdev1", 00:09:33.395 "uuid": "4d28bed4-84e6-4b8a-bd16-143e54af2097", 00:09:33.395 "is_configured": true, 00:09:33.395 "data_offset": 0, 00:09:33.395 "data_size": 65536 00:09:33.395 }, 00:09:33.395 { 00:09:33.395 "name": "BaseBdev2", 00:09:33.395 "uuid": "9e2779a1-8c62-41cb-8acb-6c55cf15456f", 00:09:33.395 "is_configured": true, 00:09:33.395 "data_offset": 0, 00:09:33.395 "data_size": 65536 00:09:33.395 }, 00:09:33.395 { 00:09:33.395 "name": "BaseBdev3", 00:09:33.395 "uuid": "6849995e-974a-485f-a2ce-54b4128beee9", 00:09:33.395 "is_configured": true, 00:09:33.395 "data_offset": 0, 00:09:33.395 "data_size": 65536 00:09:33.395 } 00:09:33.395 ] 00:09:33.395 }' 00:09:33.395 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.395 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.964 08:47:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:33.964 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:33.964 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:33.964 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:33.964 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.964 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.964 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:33.964 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.964 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.964 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.964 [2024-09-28 08:47:11.704002] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.964 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.964 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.964 "name": "Existed_Raid", 00:09:33.964 "aliases": [ 00:09:33.964 "25c44431-3560-4994-889a-b2db6241872a" 00:09:33.964 ], 00:09:33.964 "product_name": "Raid Volume", 00:09:33.964 "block_size": 512, 00:09:33.964 "num_blocks": 65536, 00:09:33.964 "uuid": "25c44431-3560-4994-889a-b2db6241872a", 00:09:33.964 "assigned_rate_limits": { 00:09:33.964 "rw_ios_per_sec": 0, 00:09:33.964 "rw_mbytes_per_sec": 0, 00:09:33.964 "r_mbytes_per_sec": 0, 00:09:33.964 "w_mbytes_per_sec": 0 00:09:33.964 }, 00:09:33.964 "claimed": false, 00:09:33.964 "zoned": false, 
00:09:33.964 "supported_io_types": { 00:09:33.964 "read": true, 00:09:33.964 "write": true, 00:09:33.964 "unmap": false, 00:09:33.964 "flush": false, 00:09:33.964 "reset": true, 00:09:33.964 "nvme_admin": false, 00:09:33.964 "nvme_io": false, 00:09:33.964 "nvme_io_md": false, 00:09:33.964 "write_zeroes": true, 00:09:33.964 "zcopy": false, 00:09:33.965 "get_zone_info": false, 00:09:33.965 "zone_management": false, 00:09:33.965 "zone_append": false, 00:09:33.965 "compare": false, 00:09:33.965 "compare_and_write": false, 00:09:33.965 "abort": false, 00:09:33.965 "seek_hole": false, 00:09:33.965 "seek_data": false, 00:09:33.965 "copy": false, 00:09:33.965 "nvme_iov_md": false 00:09:33.965 }, 00:09:33.965 "memory_domains": [ 00:09:33.965 { 00:09:33.965 "dma_device_id": "system", 00:09:33.965 "dma_device_type": 1 00:09:33.965 }, 00:09:33.965 { 00:09:33.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.965 "dma_device_type": 2 00:09:33.965 }, 00:09:33.965 { 00:09:33.965 "dma_device_id": "system", 00:09:33.965 "dma_device_type": 1 00:09:33.965 }, 00:09:33.965 { 00:09:33.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.965 "dma_device_type": 2 00:09:33.965 }, 00:09:33.965 { 00:09:33.965 "dma_device_id": "system", 00:09:33.965 "dma_device_type": 1 00:09:33.965 }, 00:09:33.965 { 00:09:33.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.965 "dma_device_type": 2 00:09:33.965 } 00:09:33.965 ], 00:09:33.965 "driver_specific": { 00:09:33.965 "raid": { 00:09:33.965 "uuid": "25c44431-3560-4994-889a-b2db6241872a", 00:09:33.965 "strip_size_kb": 0, 00:09:33.965 "state": "online", 00:09:33.965 "raid_level": "raid1", 00:09:33.965 "superblock": false, 00:09:33.965 "num_base_bdevs": 3, 00:09:33.965 "num_base_bdevs_discovered": 3, 00:09:33.965 "num_base_bdevs_operational": 3, 00:09:33.965 "base_bdevs_list": [ 00:09:33.965 { 00:09:33.965 "name": "BaseBdev1", 00:09:33.965 "uuid": "4d28bed4-84e6-4b8a-bd16-143e54af2097", 00:09:33.965 "is_configured": true, 00:09:33.965 
"data_offset": 0, 00:09:33.965 "data_size": 65536 00:09:33.965 }, 00:09:33.965 { 00:09:33.965 "name": "BaseBdev2", 00:09:33.965 "uuid": "9e2779a1-8c62-41cb-8acb-6c55cf15456f", 00:09:33.965 "is_configured": true, 00:09:33.965 "data_offset": 0, 00:09:33.965 "data_size": 65536 00:09:33.965 }, 00:09:33.965 { 00:09:33.965 "name": "BaseBdev3", 00:09:33.965 "uuid": "6849995e-974a-485f-a2ce-54b4128beee9", 00:09:33.965 "is_configured": true, 00:09:33.965 "data_offset": 0, 00:09:33.965 "data_size": 65536 00:09:33.965 } 00:09:33.965 ] 00:09:33.965 } 00:09:33.965 } 00:09:33.965 }' 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:33.965 BaseBdev2 00:09:33.965 BaseBdev3' 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.965 08:47:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.965 [2024-09-28 08:47:11.951334] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.225 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.226 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.226 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.226 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.226 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.226 "name": "Existed_Raid", 00:09:34.226 "uuid": "25c44431-3560-4994-889a-b2db6241872a", 00:09:34.226 "strip_size_kb": 0, 00:09:34.226 "state": "online", 00:09:34.226 "raid_level": "raid1", 00:09:34.226 "superblock": false, 00:09:34.226 "num_base_bdevs": 3, 00:09:34.226 "num_base_bdevs_discovered": 2, 00:09:34.226 "num_base_bdevs_operational": 2, 00:09:34.226 "base_bdevs_list": [ 00:09:34.226 { 00:09:34.226 "name": null, 00:09:34.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.226 "is_configured": false, 00:09:34.226 "data_offset": 0, 00:09:34.226 "data_size": 65536 00:09:34.226 }, 00:09:34.226 { 00:09:34.226 "name": "BaseBdev2", 00:09:34.226 "uuid": "9e2779a1-8c62-41cb-8acb-6c55cf15456f", 00:09:34.226 "is_configured": true, 00:09:34.226 "data_offset": 0, 00:09:34.226 "data_size": 65536 00:09:34.226 }, 00:09:34.226 { 00:09:34.226 "name": "BaseBdev3", 00:09:34.226 "uuid": "6849995e-974a-485f-a2ce-54b4128beee9", 00:09:34.226 "is_configured": true, 00:09:34.226 "data_offset": 0, 00:09:34.226 "data_size": 65536 00:09:34.226 } 00:09:34.226 ] 
00:09:34.226 }' 00:09:34.226 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.226 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.795 [2024-09-28 08:47:12.550371] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.795 08:47:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.795 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:34.796 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:34.796 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:34.796 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.796 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.796 [2024-09-28 08:47:12.707859] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:34.796 [2024-09-28 08:47:12.707971] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.056 [2024-09-28 08:47:12.807974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.056 [2024-09-28 08:47:12.808033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.056 [2024-09-28 08:47:12.808046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.056 08:47:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.056 BaseBdev2 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.056 
08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.056 [ 00:09:35.056 { 00:09:35.056 "name": "BaseBdev2", 00:09:35.056 "aliases": [ 00:09:35.056 "fbd0afd0-37f5-4e6a-ac76-ab0f6b42fcfe" 00:09:35.056 ], 00:09:35.056 "product_name": "Malloc disk", 00:09:35.056 "block_size": 512, 00:09:35.056 "num_blocks": 65536, 00:09:35.056 "uuid": "fbd0afd0-37f5-4e6a-ac76-ab0f6b42fcfe", 00:09:35.056 "assigned_rate_limits": { 00:09:35.056 "rw_ios_per_sec": 0, 00:09:35.056 "rw_mbytes_per_sec": 0, 00:09:35.056 "r_mbytes_per_sec": 0, 00:09:35.056 "w_mbytes_per_sec": 0 00:09:35.056 }, 00:09:35.056 "claimed": false, 00:09:35.056 "zoned": false, 00:09:35.056 "supported_io_types": { 00:09:35.056 "read": true, 00:09:35.056 "write": true, 00:09:35.056 "unmap": true, 00:09:35.056 "flush": true, 00:09:35.056 "reset": true, 00:09:35.056 "nvme_admin": false, 00:09:35.056 "nvme_io": false, 00:09:35.056 "nvme_io_md": false, 00:09:35.056 "write_zeroes": true, 
00:09:35.056 "zcopy": true, 00:09:35.056 "get_zone_info": false, 00:09:35.056 "zone_management": false, 00:09:35.056 "zone_append": false, 00:09:35.056 "compare": false, 00:09:35.056 "compare_and_write": false, 00:09:35.056 "abort": true, 00:09:35.056 "seek_hole": false, 00:09:35.056 "seek_data": false, 00:09:35.056 "copy": true, 00:09:35.056 "nvme_iov_md": false 00:09:35.056 }, 00:09:35.056 "memory_domains": [ 00:09:35.056 { 00:09:35.056 "dma_device_id": "system", 00:09:35.056 "dma_device_type": 1 00:09:35.056 }, 00:09:35.056 { 00:09:35.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.056 "dma_device_type": 2 00:09:35.056 } 00:09:35.056 ], 00:09:35.056 "driver_specific": {} 00:09:35.056 } 00:09:35.056 ] 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.056 BaseBdev3 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.056 08:47:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.056 08:47:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.056 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.056 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:35.056 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.056 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.056 [ 00:09:35.056 { 00:09:35.056 "name": "BaseBdev3", 00:09:35.056 "aliases": [ 00:09:35.056 "f6d14e21-7c7f-48dc-ab52-4b65d683636d" 00:09:35.056 ], 00:09:35.056 "product_name": "Malloc disk", 00:09:35.056 "block_size": 512, 00:09:35.056 "num_blocks": 65536, 00:09:35.056 "uuid": "f6d14e21-7c7f-48dc-ab52-4b65d683636d", 00:09:35.056 "assigned_rate_limits": { 00:09:35.056 "rw_ios_per_sec": 0, 00:09:35.056 "rw_mbytes_per_sec": 0, 00:09:35.056 "r_mbytes_per_sec": 0, 00:09:35.056 "w_mbytes_per_sec": 0 00:09:35.056 }, 00:09:35.056 "claimed": false, 00:09:35.056 "zoned": false, 00:09:35.056 "supported_io_types": { 00:09:35.056 "read": true, 00:09:35.056 "write": true, 00:09:35.056 "unmap": true, 00:09:35.056 "flush": true, 00:09:35.056 "reset": true, 00:09:35.056 "nvme_admin": false, 00:09:35.056 "nvme_io": false, 00:09:35.056 "nvme_io_md": false, 00:09:35.056 "write_zeroes": true, 
00:09:35.056 "zcopy": true, 00:09:35.056 "get_zone_info": false, 00:09:35.056 "zone_management": false, 00:09:35.056 "zone_append": false, 00:09:35.056 "compare": false, 00:09:35.056 "compare_and_write": false, 00:09:35.056 "abort": true, 00:09:35.056 "seek_hole": false, 00:09:35.057 "seek_data": false, 00:09:35.057 "copy": true, 00:09:35.057 "nvme_iov_md": false 00:09:35.057 }, 00:09:35.057 "memory_domains": [ 00:09:35.057 { 00:09:35.057 "dma_device_id": "system", 00:09:35.057 "dma_device_type": 1 00:09:35.057 }, 00:09:35.057 { 00:09:35.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.057 "dma_device_type": 2 00:09:35.057 } 00:09:35.057 ], 00:09:35.057 "driver_specific": {} 00:09:35.057 } 00:09:35.057 ] 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.057 [2024-09-28 08:47:13.035153] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.057 [2024-09-28 08:47:13.035264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.057 [2024-09-28 08:47:13.035305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.057 [2024-09-28 08:47:13.037339] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.057 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.317 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.317 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:35.317 "name": "Existed_Raid", 00:09:35.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.317 "strip_size_kb": 0, 00:09:35.317 "state": "configuring", 00:09:35.317 "raid_level": "raid1", 00:09:35.317 "superblock": false, 00:09:35.317 "num_base_bdevs": 3, 00:09:35.317 "num_base_bdevs_discovered": 2, 00:09:35.317 "num_base_bdevs_operational": 3, 00:09:35.317 "base_bdevs_list": [ 00:09:35.317 { 00:09:35.317 "name": "BaseBdev1", 00:09:35.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.317 "is_configured": false, 00:09:35.317 "data_offset": 0, 00:09:35.317 "data_size": 0 00:09:35.317 }, 00:09:35.317 { 00:09:35.317 "name": "BaseBdev2", 00:09:35.317 "uuid": "fbd0afd0-37f5-4e6a-ac76-ab0f6b42fcfe", 00:09:35.317 "is_configured": true, 00:09:35.317 "data_offset": 0, 00:09:35.317 "data_size": 65536 00:09:35.317 }, 00:09:35.317 { 00:09:35.317 "name": "BaseBdev3", 00:09:35.317 "uuid": "f6d14e21-7c7f-48dc-ab52-4b65d683636d", 00:09:35.317 "is_configured": true, 00:09:35.317 "data_offset": 0, 00:09:35.317 "data_size": 65536 00:09:35.317 } 00:09:35.317 ] 00:09:35.317 }' 00:09:35.317 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.317 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.577 [2024-09-28 08:47:13.478420] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.577 "name": "Existed_Raid", 00:09:35.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.577 "strip_size_kb": 0, 00:09:35.577 "state": "configuring", 00:09:35.577 "raid_level": "raid1", 00:09:35.577 "superblock": false, 00:09:35.577 "num_base_bdevs": 3, 
00:09:35.577 "num_base_bdevs_discovered": 1, 00:09:35.577 "num_base_bdevs_operational": 3, 00:09:35.577 "base_bdevs_list": [ 00:09:35.577 { 00:09:35.577 "name": "BaseBdev1", 00:09:35.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.577 "is_configured": false, 00:09:35.577 "data_offset": 0, 00:09:35.577 "data_size": 0 00:09:35.577 }, 00:09:35.577 { 00:09:35.577 "name": null, 00:09:35.577 "uuid": "fbd0afd0-37f5-4e6a-ac76-ab0f6b42fcfe", 00:09:35.577 "is_configured": false, 00:09:35.577 "data_offset": 0, 00:09:35.577 "data_size": 65536 00:09:35.577 }, 00:09:35.577 { 00:09:35.577 "name": "BaseBdev3", 00:09:35.577 "uuid": "f6d14e21-7c7f-48dc-ab52-4b65d683636d", 00:09:35.577 "is_configured": true, 00:09:35.577 "data_offset": 0, 00:09:35.577 "data_size": 65536 00:09:35.577 } 00:09:35.577 ] 00:09:35.577 }' 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.577 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.163 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:36.163 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.163 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.163 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.163 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.163 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:36.163 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:36.163 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.164 08:47:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.164 [2024-09-28 08:47:13.938414] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:36.164 BaseBdev1 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.164 [ 00:09:36.164 { 00:09:36.164 "name": "BaseBdev1", 00:09:36.164 "aliases": [ 00:09:36.164 "21d9b515-3156-487b-96dc-b8330c6e946a" 00:09:36.164 ], 00:09:36.164 "product_name": "Malloc disk", 
00:09:36.164 "block_size": 512, 00:09:36.164 "num_blocks": 65536, 00:09:36.164 "uuid": "21d9b515-3156-487b-96dc-b8330c6e946a", 00:09:36.164 "assigned_rate_limits": { 00:09:36.164 "rw_ios_per_sec": 0, 00:09:36.164 "rw_mbytes_per_sec": 0, 00:09:36.164 "r_mbytes_per_sec": 0, 00:09:36.164 "w_mbytes_per_sec": 0 00:09:36.164 }, 00:09:36.164 "claimed": true, 00:09:36.164 "claim_type": "exclusive_write", 00:09:36.164 "zoned": false, 00:09:36.164 "supported_io_types": { 00:09:36.164 "read": true, 00:09:36.164 "write": true, 00:09:36.164 "unmap": true, 00:09:36.164 "flush": true, 00:09:36.164 "reset": true, 00:09:36.164 "nvme_admin": false, 00:09:36.164 "nvme_io": false, 00:09:36.164 "nvme_io_md": false, 00:09:36.164 "write_zeroes": true, 00:09:36.164 "zcopy": true, 00:09:36.164 "get_zone_info": false, 00:09:36.164 "zone_management": false, 00:09:36.164 "zone_append": false, 00:09:36.164 "compare": false, 00:09:36.164 "compare_and_write": false, 00:09:36.164 "abort": true, 00:09:36.164 "seek_hole": false, 00:09:36.164 "seek_data": false, 00:09:36.164 "copy": true, 00:09:36.164 "nvme_iov_md": false 00:09:36.164 }, 00:09:36.164 "memory_domains": [ 00:09:36.164 { 00:09:36.164 "dma_device_id": "system", 00:09:36.164 "dma_device_type": 1 00:09:36.164 }, 00:09:36.164 { 00:09:36.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.164 "dma_device_type": 2 00:09:36.164 } 00:09:36.164 ], 00:09:36.164 "driver_specific": {} 00:09:36.164 } 00:09:36.164 ] 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.164 08:47:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.164 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.164 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.164 "name": "Existed_Raid", 00:09:36.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.164 "strip_size_kb": 0, 00:09:36.164 "state": "configuring", 00:09:36.164 "raid_level": "raid1", 00:09:36.164 "superblock": false, 00:09:36.164 "num_base_bdevs": 3, 00:09:36.164 "num_base_bdevs_discovered": 2, 00:09:36.164 "num_base_bdevs_operational": 3, 00:09:36.164 "base_bdevs_list": [ 00:09:36.164 { 00:09:36.164 "name": "BaseBdev1", 00:09:36.164 "uuid": 
"21d9b515-3156-487b-96dc-b8330c6e946a", 00:09:36.164 "is_configured": true, 00:09:36.164 "data_offset": 0, 00:09:36.164 "data_size": 65536 00:09:36.164 }, 00:09:36.164 { 00:09:36.164 "name": null, 00:09:36.164 "uuid": "fbd0afd0-37f5-4e6a-ac76-ab0f6b42fcfe", 00:09:36.164 "is_configured": false, 00:09:36.164 "data_offset": 0, 00:09:36.164 "data_size": 65536 00:09:36.164 }, 00:09:36.164 { 00:09:36.164 "name": "BaseBdev3", 00:09:36.164 "uuid": "f6d14e21-7c7f-48dc-ab52-4b65d683636d", 00:09:36.164 "is_configured": true, 00:09:36.164 "data_offset": 0, 00:09:36.164 "data_size": 65536 00:09:36.164 } 00:09:36.164 ] 00:09:36.164 }' 00:09:36.164 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.164 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.431 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.431 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.431 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.431 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:36.690 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.690 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:36.690 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:36.690 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.690 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.690 [2024-09-28 08:47:14.473540] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:36.690 08:47:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.690 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.690 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.690 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.690 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.690 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.690 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.690 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.691 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.691 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.691 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.691 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.691 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.691 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.691 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.691 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.691 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.691 "name": "Existed_Raid", 00:09:36.691 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:36.691 "strip_size_kb": 0, 00:09:36.691 "state": "configuring", 00:09:36.691 "raid_level": "raid1", 00:09:36.691 "superblock": false, 00:09:36.691 "num_base_bdevs": 3, 00:09:36.691 "num_base_bdevs_discovered": 1, 00:09:36.691 "num_base_bdevs_operational": 3, 00:09:36.691 "base_bdevs_list": [ 00:09:36.691 { 00:09:36.691 "name": "BaseBdev1", 00:09:36.691 "uuid": "21d9b515-3156-487b-96dc-b8330c6e946a", 00:09:36.691 "is_configured": true, 00:09:36.691 "data_offset": 0, 00:09:36.691 "data_size": 65536 00:09:36.691 }, 00:09:36.691 { 00:09:36.691 "name": null, 00:09:36.691 "uuid": "fbd0afd0-37f5-4e6a-ac76-ab0f6b42fcfe", 00:09:36.691 "is_configured": false, 00:09:36.691 "data_offset": 0, 00:09:36.691 "data_size": 65536 00:09:36.691 }, 00:09:36.691 { 00:09:36.691 "name": null, 00:09:36.691 "uuid": "f6d14e21-7c7f-48dc-ab52-4b65d683636d", 00:09:36.691 "is_configured": false, 00:09:36.691 "data_offset": 0, 00:09:36.691 "data_size": 65536 00:09:36.691 } 00:09:36.691 ] 00:09:36.691 }' 00:09:36.691 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.691 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.950 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.950 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.950 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.950 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:36.950 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.210 [2024-09-28 08:47:14.952754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.210 08:47:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.210 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.210 "name": "Existed_Raid", 00:09:37.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.210 "strip_size_kb": 0, 00:09:37.210 "state": "configuring", 00:09:37.210 "raid_level": "raid1", 00:09:37.210 "superblock": false, 00:09:37.210 "num_base_bdevs": 3, 00:09:37.210 "num_base_bdevs_discovered": 2, 00:09:37.210 "num_base_bdevs_operational": 3, 00:09:37.210 "base_bdevs_list": [ 00:09:37.210 { 00:09:37.210 "name": "BaseBdev1", 00:09:37.210 "uuid": "21d9b515-3156-487b-96dc-b8330c6e946a", 00:09:37.210 "is_configured": true, 00:09:37.210 "data_offset": 0, 00:09:37.210 "data_size": 65536 00:09:37.210 }, 00:09:37.210 { 00:09:37.210 "name": null, 00:09:37.210 "uuid": "fbd0afd0-37f5-4e6a-ac76-ab0f6b42fcfe", 00:09:37.210 "is_configured": false, 00:09:37.210 "data_offset": 0, 00:09:37.210 "data_size": 65536 00:09:37.210 }, 00:09:37.210 { 00:09:37.210 "name": "BaseBdev3", 00:09:37.210 "uuid": "f6d14e21-7c7f-48dc-ab52-4b65d683636d", 00:09:37.210 "is_configured": true, 00:09:37.210 "data_offset": 0, 00:09:37.210 "data_size": 65536 00:09:37.210 } 00:09:37.210 ] 00:09:37.210 }' 00:09:37.210 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.210 08:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.470 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.470 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:37.470 08:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:37.470 08:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.470 08:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.470 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:37.470 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:37.470 08:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.470 08:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.470 [2024-09-28 08:47:15.428001] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:37.729 08:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.729 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.729 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.729 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.729 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.729 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.729 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.729 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.729 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.729 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.729 08:47:15 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.729 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.730 08:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.730 08:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.730 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.730 08:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.730 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.730 "name": "Existed_Raid", 00:09:37.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.730 "strip_size_kb": 0, 00:09:37.730 "state": "configuring", 00:09:37.730 "raid_level": "raid1", 00:09:37.730 "superblock": false, 00:09:37.730 "num_base_bdevs": 3, 00:09:37.730 "num_base_bdevs_discovered": 1, 00:09:37.730 "num_base_bdevs_operational": 3, 00:09:37.730 "base_bdevs_list": [ 00:09:37.730 { 00:09:37.730 "name": null, 00:09:37.730 "uuid": "21d9b515-3156-487b-96dc-b8330c6e946a", 00:09:37.730 "is_configured": false, 00:09:37.730 "data_offset": 0, 00:09:37.730 "data_size": 65536 00:09:37.730 }, 00:09:37.730 { 00:09:37.730 "name": null, 00:09:37.730 "uuid": "fbd0afd0-37f5-4e6a-ac76-ab0f6b42fcfe", 00:09:37.730 "is_configured": false, 00:09:37.730 "data_offset": 0, 00:09:37.730 "data_size": 65536 00:09:37.730 }, 00:09:37.730 { 00:09:37.730 "name": "BaseBdev3", 00:09:37.730 "uuid": "f6d14e21-7c7f-48dc-ab52-4b65d683636d", 00:09:37.730 "is_configured": true, 00:09:37.730 "data_offset": 0, 00:09:37.730 "data_size": 65536 00:09:37.730 } 00:09:37.730 ] 00:09:37.730 }' 00:09:37.730 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.730 08:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:09:37.989 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.989 08:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.989 08:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.989 08:47:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:38.249 08:47:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.249 [2024-09-28 08:47:16.021727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.249 "name": "Existed_Raid", 00:09:38.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.249 "strip_size_kb": 0, 00:09:38.249 "state": "configuring", 00:09:38.249 "raid_level": "raid1", 00:09:38.249 "superblock": false, 00:09:38.249 "num_base_bdevs": 3, 00:09:38.249 "num_base_bdevs_discovered": 2, 00:09:38.249 "num_base_bdevs_operational": 3, 00:09:38.249 "base_bdevs_list": [ 00:09:38.249 { 00:09:38.249 "name": null, 00:09:38.249 "uuid": "21d9b515-3156-487b-96dc-b8330c6e946a", 00:09:38.249 "is_configured": false, 00:09:38.249 "data_offset": 0, 00:09:38.249 "data_size": 65536 00:09:38.249 }, 00:09:38.249 { 00:09:38.249 "name": "BaseBdev2", 00:09:38.249 "uuid": "fbd0afd0-37f5-4e6a-ac76-ab0f6b42fcfe", 00:09:38.249 "is_configured": true, 00:09:38.249 "data_offset": 0, 00:09:38.249 "data_size": 65536 00:09:38.249 }, 00:09:38.249 { 00:09:38.249 "name": "BaseBdev3", 
00:09:38.249 "uuid": "f6d14e21-7c7f-48dc-ab52-4b65d683636d", 00:09:38.249 "is_configured": true, 00:09:38.249 "data_offset": 0, 00:09:38.249 "data_size": 65536 00:09:38.249 } 00:09:38.249 ] 00:09:38.249 }' 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.249 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.509 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:38.509 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.509 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.509 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.509 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 21d9b515-3156-487b-96dc-b8330c6e946a 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:38.769 [2024-09-28 08:47:16.586844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:38.769 [2024-09-28 08:47:16.586971] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:38.769 [2024-09-28 08:47:16.586985] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:38.769 [2024-09-28 08:47:16.587289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:38.769 [2024-09-28 08:47:16.587479] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:38.769 [2024-09-28 08:47:16.587492] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:38.769 [2024-09-28 08:47:16.587799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.769 NewBaseBdev 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.769 
08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.769 [ 00:09:38.769 { 00:09:38.769 "name": "NewBaseBdev", 00:09:38.769 "aliases": [ 00:09:38.769 "21d9b515-3156-487b-96dc-b8330c6e946a" 00:09:38.769 ], 00:09:38.769 "product_name": "Malloc disk", 00:09:38.769 "block_size": 512, 00:09:38.769 "num_blocks": 65536, 00:09:38.769 "uuid": "21d9b515-3156-487b-96dc-b8330c6e946a", 00:09:38.769 "assigned_rate_limits": { 00:09:38.769 "rw_ios_per_sec": 0, 00:09:38.769 "rw_mbytes_per_sec": 0, 00:09:38.769 "r_mbytes_per_sec": 0, 00:09:38.769 "w_mbytes_per_sec": 0 00:09:38.769 }, 00:09:38.769 "claimed": true, 00:09:38.769 "claim_type": "exclusive_write", 00:09:38.769 "zoned": false, 00:09:38.769 "supported_io_types": { 00:09:38.769 "read": true, 00:09:38.769 "write": true, 00:09:38.769 "unmap": true, 00:09:38.769 "flush": true, 00:09:38.769 "reset": true, 00:09:38.769 "nvme_admin": false, 00:09:38.769 "nvme_io": false, 00:09:38.769 "nvme_io_md": false, 00:09:38.769 "write_zeroes": true, 00:09:38.769 "zcopy": true, 00:09:38.769 "get_zone_info": false, 00:09:38.769 "zone_management": false, 00:09:38.769 "zone_append": false, 00:09:38.769 "compare": false, 00:09:38.769 "compare_and_write": false, 00:09:38.769 "abort": true, 00:09:38.769 "seek_hole": false, 00:09:38.769 "seek_data": false, 00:09:38.769 "copy": true, 00:09:38.769 "nvme_iov_md": false 00:09:38.769 }, 00:09:38.769 "memory_domains": [ 00:09:38.769 { 00:09:38.769 "dma_device_id": "system", 00:09:38.769 "dma_device_type": 1 
00:09:38.769 }, 00:09:38.769 { 00:09:38.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.769 "dma_device_type": 2 00:09:38.769 } 00:09:38.769 ], 00:09:38.769 "driver_specific": {} 00:09:38.769 } 00:09:38.769 ] 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.769 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.769 "name": "Existed_Raid", 00:09:38.770 "uuid": "ff672d35-9c9e-43f4-a9d1-246bea034683", 00:09:38.770 "strip_size_kb": 0, 00:09:38.770 "state": "online", 00:09:38.770 "raid_level": "raid1", 00:09:38.770 "superblock": false, 00:09:38.770 "num_base_bdevs": 3, 00:09:38.770 "num_base_bdevs_discovered": 3, 00:09:38.770 "num_base_bdevs_operational": 3, 00:09:38.770 "base_bdevs_list": [ 00:09:38.770 { 00:09:38.770 "name": "NewBaseBdev", 00:09:38.770 "uuid": "21d9b515-3156-487b-96dc-b8330c6e946a", 00:09:38.770 "is_configured": true, 00:09:38.770 "data_offset": 0, 00:09:38.770 "data_size": 65536 00:09:38.770 }, 00:09:38.770 { 00:09:38.770 "name": "BaseBdev2", 00:09:38.770 "uuid": "fbd0afd0-37f5-4e6a-ac76-ab0f6b42fcfe", 00:09:38.770 "is_configured": true, 00:09:38.770 "data_offset": 0, 00:09:38.770 "data_size": 65536 00:09:38.770 }, 00:09:38.770 { 00:09:38.770 "name": "BaseBdev3", 00:09:38.770 "uuid": "f6d14e21-7c7f-48dc-ab52-4b65d683636d", 00:09:38.770 "is_configured": true, 00:09:38.770 "data_offset": 0, 00:09:38.770 "data_size": 65536 00:09:38.770 } 00:09:38.770 ] 00:09:38.770 }' 00:09:38.770 08:47:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.770 08:47:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.339 [2024-09-28 08:47:17.046370] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.339 "name": "Existed_Raid", 00:09:39.339 "aliases": [ 00:09:39.339 "ff672d35-9c9e-43f4-a9d1-246bea034683" 00:09:39.339 ], 00:09:39.339 "product_name": "Raid Volume", 00:09:39.339 "block_size": 512, 00:09:39.339 "num_blocks": 65536, 00:09:39.339 "uuid": "ff672d35-9c9e-43f4-a9d1-246bea034683", 00:09:39.339 "assigned_rate_limits": { 00:09:39.339 "rw_ios_per_sec": 0, 00:09:39.339 "rw_mbytes_per_sec": 0, 00:09:39.339 "r_mbytes_per_sec": 0, 00:09:39.339 "w_mbytes_per_sec": 0 00:09:39.339 }, 00:09:39.339 "claimed": false, 00:09:39.339 "zoned": false, 00:09:39.339 "supported_io_types": { 00:09:39.339 "read": true, 00:09:39.339 "write": true, 00:09:39.339 "unmap": false, 00:09:39.339 "flush": false, 00:09:39.339 "reset": true, 00:09:39.339 "nvme_admin": false, 00:09:39.339 "nvme_io": false, 00:09:39.339 "nvme_io_md": false, 00:09:39.339 "write_zeroes": true, 00:09:39.339 "zcopy": false, 00:09:39.339 "get_zone_info": false, 00:09:39.339 "zone_management": false, 00:09:39.339 
"zone_append": false, 00:09:39.339 "compare": false, 00:09:39.339 "compare_and_write": false, 00:09:39.339 "abort": false, 00:09:39.339 "seek_hole": false, 00:09:39.339 "seek_data": false, 00:09:39.339 "copy": false, 00:09:39.339 "nvme_iov_md": false 00:09:39.339 }, 00:09:39.339 "memory_domains": [ 00:09:39.339 { 00:09:39.339 "dma_device_id": "system", 00:09:39.339 "dma_device_type": 1 00:09:39.339 }, 00:09:39.339 { 00:09:39.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.339 "dma_device_type": 2 00:09:39.339 }, 00:09:39.339 { 00:09:39.339 "dma_device_id": "system", 00:09:39.339 "dma_device_type": 1 00:09:39.339 }, 00:09:39.339 { 00:09:39.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.339 "dma_device_type": 2 00:09:39.339 }, 00:09:39.339 { 00:09:39.339 "dma_device_id": "system", 00:09:39.339 "dma_device_type": 1 00:09:39.339 }, 00:09:39.339 { 00:09:39.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.339 "dma_device_type": 2 00:09:39.339 } 00:09:39.339 ], 00:09:39.339 "driver_specific": { 00:09:39.339 "raid": { 00:09:39.339 "uuid": "ff672d35-9c9e-43f4-a9d1-246bea034683", 00:09:39.339 "strip_size_kb": 0, 00:09:39.339 "state": "online", 00:09:39.339 "raid_level": "raid1", 00:09:39.339 "superblock": false, 00:09:39.339 "num_base_bdevs": 3, 00:09:39.339 "num_base_bdevs_discovered": 3, 00:09:39.339 "num_base_bdevs_operational": 3, 00:09:39.339 "base_bdevs_list": [ 00:09:39.339 { 00:09:39.339 "name": "NewBaseBdev", 00:09:39.339 "uuid": "21d9b515-3156-487b-96dc-b8330c6e946a", 00:09:39.339 "is_configured": true, 00:09:39.339 "data_offset": 0, 00:09:39.339 "data_size": 65536 00:09:39.339 }, 00:09:39.339 { 00:09:39.339 "name": "BaseBdev2", 00:09:39.339 "uuid": "fbd0afd0-37f5-4e6a-ac76-ab0f6b42fcfe", 00:09:39.339 "is_configured": true, 00:09:39.339 "data_offset": 0, 00:09:39.339 "data_size": 65536 00:09:39.339 }, 00:09:39.339 { 00:09:39.339 "name": "BaseBdev3", 00:09:39.339 "uuid": "f6d14e21-7c7f-48dc-ab52-4b65d683636d", 00:09:39.339 "is_configured": true, 
00:09:39.339 "data_offset": 0, 00:09:39.339 "data_size": 65536 00:09:39.339 } 00:09:39.339 ] 00:09:39.339 } 00:09:39.339 } 00:09:39.339 }' 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:39.339 BaseBdev2 00:09:39.339 BaseBdev3' 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.339 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.340 [2024-09-28 08:47:17.257687] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:09:39.340 [2024-09-28 08:47:17.257754] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.340 [2024-09-28 08:47:17.257857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.340 [2024-09-28 08:47:17.258179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.340 [2024-09-28 08:47:17.258230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67406 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 67406 ']' 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 67406 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67406 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:39.340 killing process with pid 67406 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67406' 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 67406 00:09:39.340 [2024-09-28 08:47:17.295617] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:09:39.340 08:47:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 67406 00:09:39.909 [2024-09-28 08:47:17.616446] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.291 ************************************ 00:09:41.291 END TEST raid_state_function_test 00:09:41.291 ************************************ 00:09:41.291 08:47:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:41.291 00:09:41.291 real 0m10.634s 00:09:41.291 user 0m16.588s 00:09:41.291 sys 0m1.912s 00:09:41.291 08:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.291 08:47:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.291 08:47:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:41.291 08:47:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:41.291 08:47:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.291 08:47:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.291 ************************************ 00:09:41.291 START TEST raid_state_function_test_sb 00:09:41.291 ************************************ 00:09:41.291 08:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:09:41.291 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:41.291 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:41.291 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:41.291 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:41.291 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:41.292 Process raid pid: 68034 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68034 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68034' 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68034 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 68034 ']' 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.292 08:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.292 [2024-09-28 08:47:19.126812] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:41.292 [2024-09-28 08:47:19.126950] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.552 [2024-09-28 08:47:19.289004] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.552 [2024-09-28 08:47:19.527973] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.812 [2024-09-28 08:47:19.761451] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.812 [2024-09-28 08:47:19.761484] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.072 [2024-09-28 08:47:19.951977] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:42.072 [2024-09-28 08:47:19.952030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:42.072 [2024-09-28 08:47:19.952050] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:42.072 [2024-09-28 08:47:19.952059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.072 [2024-09-28 08:47:19.952065] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:42.072 [2024-09-28 08:47:19.952074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.072 08:47:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.072 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.072 "name": "Existed_Raid", 00:09:42.072 "uuid": "6e6dbf92-ccb4-49e2-a6f2-d6ebaf384d98", 00:09:42.072 "strip_size_kb": 0, 00:09:42.072 "state": "configuring", 00:09:42.072 "raid_level": "raid1", 00:09:42.072 "superblock": true, 00:09:42.072 "num_base_bdevs": 3, 00:09:42.072 "num_base_bdevs_discovered": 0, 00:09:42.072 "num_base_bdevs_operational": 3, 00:09:42.072 "base_bdevs_list": [ 00:09:42.072 { 00:09:42.072 "name": "BaseBdev1", 00:09:42.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.072 "is_configured": false, 00:09:42.072 "data_offset": 0, 00:09:42.072 "data_size": 0 00:09:42.072 }, 00:09:42.072 { 00:09:42.072 "name": "BaseBdev2", 00:09:42.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.072 "is_configured": false, 00:09:42.072 "data_offset": 0, 00:09:42.072 "data_size": 0 00:09:42.072 }, 00:09:42.072 { 00:09:42.072 "name": "BaseBdev3", 00:09:42.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.072 "is_configured": false, 00:09:42.072 "data_offset": 0, 00:09:42.072 "data_size": 0 00:09:42.072 } 00:09:42.072 ] 00:09:42.072 }' 00:09:42.072 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.072 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.642 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:42.642 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.643 [2024-09-28 08:47:20.367156] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:42.643 [2024-09-28 08:47:20.367238] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.643 [2024-09-28 08:47:20.379168] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:42.643 [2024-09-28 08:47:20.379259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:42.643 [2024-09-28 08:47:20.379286] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:42.643 [2024-09-28 08:47:20.379309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.643 [2024-09-28 08:47:20.379326] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:42.643 [2024-09-28 08:47:20.379346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.643 [2024-09-28 08:47:20.468166] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.643 BaseBdev1 
00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.643 [ 00:09:42.643 { 00:09:42.643 "name": "BaseBdev1", 00:09:42.643 "aliases": [ 00:09:42.643 "91dec13a-9343-499e-9b8c-53383127d754" 00:09:42.643 ], 00:09:42.643 "product_name": "Malloc disk", 00:09:42.643 "block_size": 512, 00:09:42.643 "num_blocks": 65536, 00:09:42.643 "uuid": "91dec13a-9343-499e-9b8c-53383127d754", 00:09:42.643 "assigned_rate_limits": { 00:09:42.643 
"rw_ios_per_sec": 0, 00:09:42.643 "rw_mbytes_per_sec": 0, 00:09:42.643 "r_mbytes_per_sec": 0, 00:09:42.643 "w_mbytes_per_sec": 0 00:09:42.643 }, 00:09:42.643 "claimed": true, 00:09:42.643 "claim_type": "exclusive_write", 00:09:42.643 "zoned": false, 00:09:42.643 "supported_io_types": { 00:09:42.643 "read": true, 00:09:42.643 "write": true, 00:09:42.643 "unmap": true, 00:09:42.643 "flush": true, 00:09:42.643 "reset": true, 00:09:42.643 "nvme_admin": false, 00:09:42.643 "nvme_io": false, 00:09:42.643 "nvme_io_md": false, 00:09:42.643 "write_zeroes": true, 00:09:42.643 "zcopy": true, 00:09:42.643 "get_zone_info": false, 00:09:42.643 "zone_management": false, 00:09:42.643 "zone_append": false, 00:09:42.643 "compare": false, 00:09:42.643 "compare_and_write": false, 00:09:42.643 "abort": true, 00:09:42.643 "seek_hole": false, 00:09:42.643 "seek_data": false, 00:09:42.643 "copy": true, 00:09:42.643 "nvme_iov_md": false 00:09:42.643 }, 00:09:42.643 "memory_domains": [ 00:09:42.643 { 00:09:42.643 "dma_device_id": "system", 00:09:42.643 "dma_device_type": 1 00:09:42.643 }, 00:09:42.643 { 00:09:42.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.643 "dma_device_type": 2 00:09:42.643 } 00:09:42.643 ], 00:09:42.643 "driver_specific": {} 00:09:42.643 } 00:09:42.643 ] 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.643 "name": "Existed_Raid", 00:09:42.643 "uuid": "74f64a99-d843-4bb7-a792-2a6cdd55e9d7", 00:09:42.643 "strip_size_kb": 0, 00:09:42.643 "state": "configuring", 00:09:42.643 "raid_level": "raid1", 00:09:42.643 "superblock": true, 00:09:42.643 "num_base_bdevs": 3, 00:09:42.643 "num_base_bdevs_discovered": 1, 00:09:42.643 "num_base_bdevs_operational": 3, 00:09:42.643 "base_bdevs_list": [ 00:09:42.643 { 00:09:42.643 "name": "BaseBdev1", 00:09:42.643 "uuid": "91dec13a-9343-499e-9b8c-53383127d754", 00:09:42.643 "is_configured": true, 00:09:42.643 "data_offset": 2048, 00:09:42.643 "data_size": 63488 
00:09:42.643 }, 00:09:42.643 { 00:09:42.643 "name": "BaseBdev2", 00:09:42.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.643 "is_configured": false, 00:09:42.643 "data_offset": 0, 00:09:42.643 "data_size": 0 00:09:42.643 }, 00:09:42.643 { 00:09:42.643 "name": "BaseBdev3", 00:09:42.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.643 "is_configured": false, 00:09:42.643 "data_offset": 0, 00:09:42.643 "data_size": 0 00:09:42.643 } 00:09:42.643 ] 00:09:42.643 }' 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.643 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.214 [2024-09-28 08:47:20.935384] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:43.214 [2024-09-28 08:47:20.935476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.214 [2024-09-28 08:47:20.947423] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.214 [2024-09-28 08:47:20.949543] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.214 [2024-09-28 08:47:20.949586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.214 [2024-09-28 08:47:20.949595] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:43.214 [2024-09-28 08:47:20.949604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.214 "name": "Existed_Raid", 00:09:43.214 "uuid": "08940418-d566-449a-9a14-9fc1e399e82d", 00:09:43.214 "strip_size_kb": 0, 00:09:43.214 "state": "configuring", 00:09:43.214 "raid_level": "raid1", 00:09:43.214 "superblock": true, 00:09:43.214 "num_base_bdevs": 3, 00:09:43.214 "num_base_bdevs_discovered": 1, 00:09:43.214 "num_base_bdevs_operational": 3, 00:09:43.214 "base_bdevs_list": [ 00:09:43.214 { 00:09:43.214 "name": "BaseBdev1", 00:09:43.214 "uuid": "91dec13a-9343-499e-9b8c-53383127d754", 00:09:43.214 "is_configured": true, 00:09:43.214 "data_offset": 2048, 00:09:43.214 "data_size": 63488 00:09:43.214 }, 00:09:43.214 { 00:09:43.214 "name": "BaseBdev2", 00:09:43.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.214 "is_configured": false, 00:09:43.214 "data_offset": 0, 00:09:43.214 "data_size": 0 00:09:43.214 }, 00:09:43.214 { 00:09:43.214 "name": "BaseBdev3", 00:09:43.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.214 "is_configured": false, 00:09:43.214 "data_offset": 0, 00:09:43.214 "data_size": 0 00:09:43.214 } 00:09:43.214 ] 00:09:43.214 }' 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.214 08:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.474 [2024-09-28 08:47:21.390507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.474 BaseBdev2 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.474 [ 00:09:43.474 { 00:09:43.474 "name": "BaseBdev2", 00:09:43.474 "aliases": [ 00:09:43.474 "2a7511ec-fa5c-4c11-ad0d-341ec115b11a" 00:09:43.474 ], 00:09:43.474 "product_name": "Malloc disk", 00:09:43.474 "block_size": 512, 00:09:43.474 "num_blocks": 65536, 00:09:43.474 "uuid": "2a7511ec-fa5c-4c11-ad0d-341ec115b11a", 00:09:43.474 "assigned_rate_limits": { 00:09:43.474 "rw_ios_per_sec": 0, 00:09:43.474 "rw_mbytes_per_sec": 0, 00:09:43.474 "r_mbytes_per_sec": 0, 00:09:43.474 "w_mbytes_per_sec": 0 00:09:43.474 }, 00:09:43.474 "claimed": true, 00:09:43.474 "claim_type": "exclusive_write", 00:09:43.474 "zoned": false, 00:09:43.474 "supported_io_types": { 00:09:43.474 "read": true, 00:09:43.474 "write": true, 00:09:43.474 "unmap": true, 00:09:43.474 "flush": true, 00:09:43.474 "reset": true, 00:09:43.474 "nvme_admin": false, 00:09:43.474 "nvme_io": false, 00:09:43.474 "nvme_io_md": false, 00:09:43.474 "write_zeroes": true, 00:09:43.474 "zcopy": true, 00:09:43.474 "get_zone_info": false, 00:09:43.474 "zone_management": false, 00:09:43.474 "zone_append": false, 00:09:43.474 "compare": false, 00:09:43.474 "compare_and_write": false, 00:09:43.474 "abort": true, 00:09:43.474 "seek_hole": false, 00:09:43.474 "seek_data": false, 00:09:43.474 "copy": true, 00:09:43.474 "nvme_iov_md": false 00:09:43.474 }, 00:09:43.474 "memory_domains": [ 00:09:43.474 { 00:09:43.474 "dma_device_id": "system", 00:09:43.474 "dma_device_type": 1 00:09:43.474 }, 00:09:43.474 { 00:09:43.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.474 "dma_device_type": 2 00:09:43.474 } 00:09:43.474 ], 00:09:43.474 "driver_specific": {} 00:09:43.474 } 00:09:43.474 ] 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.474 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.733 
08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.733 "name": "Existed_Raid", 00:09:43.733 "uuid": "08940418-d566-449a-9a14-9fc1e399e82d", 00:09:43.733 "strip_size_kb": 0, 00:09:43.733 "state": "configuring", 00:09:43.733 "raid_level": "raid1", 00:09:43.733 "superblock": true, 00:09:43.733 "num_base_bdevs": 3, 00:09:43.733 "num_base_bdevs_discovered": 2, 00:09:43.733 "num_base_bdevs_operational": 3, 00:09:43.733 "base_bdevs_list": [ 00:09:43.733 { 00:09:43.733 "name": "BaseBdev1", 00:09:43.733 "uuid": "91dec13a-9343-499e-9b8c-53383127d754", 00:09:43.733 "is_configured": true, 00:09:43.733 "data_offset": 2048, 00:09:43.733 "data_size": 63488 00:09:43.733 }, 00:09:43.733 { 00:09:43.733 "name": "BaseBdev2", 00:09:43.734 "uuid": "2a7511ec-fa5c-4c11-ad0d-341ec115b11a", 00:09:43.734 "is_configured": true, 00:09:43.734 "data_offset": 2048, 00:09:43.734 "data_size": 63488 00:09:43.734 }, 00:09:43.734 { 00:09:43.734 "name": "BaseBdev3", 00:09:43.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.734 "is_configured": false, 00:09:43.734 "data_offset": 0, 00:09:43.734 "data_size": 0 00:09:43.734 } 00:09:43.734 ] 00:09:43.734 }' 00:09:43.734 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.734 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.993 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:43.993 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.993 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.993 [2024-09-28 08:47:21.890862] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.993 [2024-09-28 08:47:21.891269] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:43.993 [2024-09-28 08:47:21.891302] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:43.993 [2024-09-28 08:47:21.891603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:43.993 [2024-09-28 08:47:21.891793] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:43.994 [2024-09-28 08:47:21.891803] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:43.994 BaseBdev3 00:09:43.994 [2024-09-28 08:47:21.891954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.994 08:47:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.994 [ 00:09:43.994 { 00:09:43.994 "name": "BaseBdev3", 00:09:43.994 "aliases": [ 00:09:43.994 "0f7505cc-c4b8-4eb8-977c-37f585038952" 00:09:43.994 ], 00:09:43.994 "product_name": "Malloc disk", 00:09:43.994 "block_size": 512, 00:09:43.994 "num_blocks": 65536, 00:09:43.994 "uuid": "0f7505cc-c4b8-4eb8-977c-37f585038952", 00:09:43.994 "assigned_rate_limits": { 00:09:43.994 "rw_ios_per_sec": 0, 00:09:43.994 "rw_mbytes_per_sec": 0, 00:09:43.994 "r_mbytes_per_sec": 0, 00:09:43.994 "w_mbytes_per_sec": 0 00:09:43.994 }, 00:09:43.994 "claimed": true, 00:09:43.994 "claim_type": "exclusive_write", 00:09:43.994 "zoned": false, 00:09:43.994 "supported_io_types": { 00:09:43.994 "read": true, 00:09:43.994 "write": true, 00:09:43.994 "unmap": true, 00:09:43.994 "flush": true, 00:09:43.994 "reset": true, 00:09:43.994 "nvme_admin": false, 00:09:43.994 "nvme_io": false, 00:09:43.994 "nvme_io_md": false, 00:09:43.994 "write_zeroes": true, 00:09:43.994 "zcopy": true, 00:09:43.994 "get_zone_info": false, 00:09:43.994 "zone_management": false, 00:09:43.994 "zone_append": false, 00:09:43.994 "compare": false, 00:09:43.994 "compare_and_write": false, 00:09:43.994 "abort": true, 00:09:43.994 "seek_hole": false, 00:09:43.994 "seek_data": false, 00:09:43.994 "copy": true, 00:09:43.994 "nvme_iov_md": false 00:09:43.994 }, 00:09:43.994 "memory_domains": [ 00:09:43.994 { 00:09:43.994 "dma_device_id": "system", 00:09:43.994 "dma_device_type": 1 00:09:43.994 }, 00:09:43.994 { 00:09:43.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.994 "dma_device_type": 2 00:09:43.994 } 00:09:43.994 ], 00:09:43.994 "driver_specific": {} 00:09:43.994 } 00:09:43.994 ] 
00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.994 
08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.994 "name": "Existed_Raid", 00:09:43.994 "uuid": "08940418-d566-449a-9a14-9fc1e399e82d", 00:09:43.994 "strip_size_kb": 0, 00:09:43.994 "state": "online", 00:09:43.994 "raid_level": "raid1", 00:09:43.994 "superblock": true, 00:09:43.994 "num_base_bdevs": 3, 00:09:43.994 "num_base_bdevs_discovered": 3, 00:09:43.994 "num_base_bdevs_operational": 3, 00:09:43.994 "base_bdevs_list": [ 00:09:43.994 { 00:09:43.994 "name": "BaseBdev1", 00:09:43.994 "uuid": "91dec13a-9343-499e-9b8c-53383127d754", 00:09:43.994 "is_configured": true, 00:09:43.994 "data_offset": 2048, 00:09:43.994 "data_size": 63488 00:09:43.994 }, 00:09:43.994 { 00:09:43.994 "name": "BaseBdev2", 00:09:43.994 "uuid": "2a7511ec-fa5c-4c11-ad0d-341ec115b11a", 00:09:43.994 "is_configured": true, 00:09:43.994 "data_offset": 2048, 00:09:43.994 "data_size": 63488 00:09:43.994 }, 00:09:43.994 { 00:09:43.994 "name": "BaseBdev3", 00:09:43.994 "uuid": "0f7505cc-c4b8-4eb8-977c-37f585038952", 00:09:43.994 "is_configured": true, 00:09:43.994 "data_offset": 2048, 00:09:43.994 "data_size": 63488 00:09:43.994 } 00:09:43.994 ] 00:09:43.994 }' 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.994 08:47:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.563 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:44.563 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:44.563 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
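At this point the log shows `verify_raid_bdev_state Existed_Raid online raid1 0 3` passing: the helper filters `bdev_raid_get_bdevs all` output with jq and compares each field against the expected values. A minimal Python sketch of the same assertion, using field values copied from the JSON printed in the log above (the `check_state` helper name is ours, not part of the test suite):

```python
import json

# Excerpt of the `bdev_raid_get_bdevs all` output as logged above,
# reduced to the fields verify_raid_bdev_state actually compares.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}
""")

def check_state(info, expected_state, raid_level, strip_size_kb, operational):
    # Mirrors the field-by-field comparisons the shell helper does via jq.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == operational
    return info["num_base_bdevs_discovered"]

# A fully assembled array reports discovered == operational.
discovered = check_state(raid_bdev_info, "online", "raid1", 0, 3)
```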
00:09:44.563 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:44.563 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.563 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.563 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:44.563 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.563 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.563 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.563 [2024-09-28 08:47:22.338414] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.563 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.563 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.563 "name": "Existed_Raid", 00:09:44.563 "aliases": [ 00:09:44.563 "08940418-d566-449a-9a14-9fc1e399e82d" 00:09:44.563 ], 00:09:44.563 "product_name": "Raid Volume", 00:09:44.563 "block_size": 512, 00:09:44.563 "num_blocks": 63488, 00:09:44.563 "uuid": "08940418-d566-449a-9a14-9fc1e399e82d", 00:09:44.563 "assigned_rate_limits": { 00:09:44.563 "rw_ios_per_sec": 0, 00:09:44.563 "rw_mbytes_per_sec": 0, 00:09:44.563 "r_mbytes_per_sec": 0, 00:09:44.563 "w_mbytes_per_sec": 0 00:09:44.563 }, 00:09:44.563 "claimed": false, 00:09:44.563 "zoned": false, 00:09:44.563 "supported_io_types": { 00:09:44.563 "read": true, 00:09:44.563 "write": true, 00:09:44.563 "unmap": false, 00:09:44.563 "flush": false, 00:09:44.563 "reset": true, 00:09:44.563 "nvme_admin": false, 00:09:44.563 "nvme_io": false, 00:09:44.563 "nvme_io_md": false, 00:09:44.563 "write_zeroes": true, 
00:09:44.563 "zcopy": false, 00:09:44.563 "get_zone_info": false, 00:09:44.563 "zone_management": false, 00:09:44.563 "zone_append": false, 00:09:44.563 "compare": false, 00:09:44.563 "compare_and_write": false, 00:09:44.563 "abort": false, 00:09:44.563 "seek_hole": false, 00:09:44.563 "seek_data": false, 00:09:44.563 "copy": false, 00:09:44.563 "nvme_iov_md": false 00:09:44.563 }, 00:09:44.563 "memory_domains": [ 00:09:44.563 { 00:09:44.563 "dma_device_id": "system", 00:09:44.563 "dma_device_type": 1 00:09:44.563 }, 00:09:44.563 { 00:09:44.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.564 "dma_device_type": 2 00:09:44.564 }, 00:09:44.564 { 00:09:44.564 "dma_device_id": "system", 00:09:44.564 "dma_device_type": 1 00:09:44.564 }, 00:09:44.564 { 00:09:44.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.564 "dma_device_type": 2 00:09:44.564 }, 00:09:44.564 { 00:09:44.564 "dma_device_id": "system", 00:09:44.564 "dma_device_type": 1 00:09:44.564 }, 00:09:44.564 { 00:09:44.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.564 "dma_device_type": 2 00:09:44.564 } 00:09:44.564 ], 00:09:44.564 "driver_specific": { 00:09:44.564 "raid": { 00:09:44.564 "uuid": "08940418-d566-449a-9a14-9fc1e399e82d", 00:09:44.564 "strip_size_kb": 0, 00:09:44.564 "state": "online", 00:09:44.564 "raid_level": "raid1", 00:09:44.564 "superblock": true, 00:09:44.564 "num_base_bdevs": 3, 00:09:44.564 "num_base_bdevs_discovered": 3, 00:09:44.564 "num_base_bdevs_operational": 3, 00:09:44.564 "base_bdevs_list": [ 00:09:44.564 { 00:09:44.564 "name": "BaseBdev1", 00:09:44.564 "uuid": "91dec13a-9343-499e-9b8c-53383127d754", 00:09:44.564 "is_configured": true, 00:09:44.564 "data_offset": 2048, 00:09:44.564 "data_size": 63488 00:09:44.564 }, 00:09:44.564 { 00:09:44.564 "name": "BaseBdev2", 00:09:44.564 "uuid": "2a7511ec-fa5c-4c11-ad0d-341ec115b11a", 00:09:44.564 "is_configured": true, 00:09:44.564 "data_offset": 2048, 00:09:44.564 "data_size": 63488 00:09:44.564 }, 00:09:44.564 { 
00:09:44.564 "name": "BaseBdev3",
00:09:44.564 "uuid": "0f7505cc-c4b8-4eb8-977c-37f585038952",
00:09:44.564 "is_configured": true,
00:09:44.564 "data_offset": 2048,
00:09:44.564 "data_size": 63488
00:09:44.564 }
00:09:44.564 ]
00:09:44.564 }
00:09:44.564 }
00:09:44.564 }'
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:44.564 BaseBdev2
00:09:44.564 BaseBdev3'
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.564 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:44.824 [2024-09-28 08:47:22.573780] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:44.824 "name": "Existed_Raid",
00:09:44.824 "uuid": "08940418-d566-449a-9a14-9fc1e399e82d",
00:09:44.824 "strip_size_kb": 0,
00:09:44.824 "state": "online",
00:09:44.824 "raid_level": "raid1",
00:09:44.824 "superblock": true,
00:09:44.824 "num_base_bdevs": 3,
00:09:44.824 "num_base_bdevs_discovered": 2,
00:09:44.824 "num_base_bdevs_operational": 2,
00:09:44.824 "base_bdevs_list": [
00:09:44.824 {
00:09:44.824 "name": null,
00:09:44.824 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:44.824 "is_configured": false,
00:09:44.824 "data_offset": 0,
00:09:44.824 "data_size": 63488
00:09:44.824 },
00:09:44.824 {
00:09:44.824 "name": "BaseBdev2",
00:09:44.824 "uuid": "2a7511ec-fa5c-4c11-ad0d-341ec115b11a",
00:09:44.824 "is_configured": true,
00:09:44.824 "data_offset": 2048,
00:09:44.824 "data_size": 63488
00:09:44.824 },
00:09:44.824 {
00:09:44.824 "name": "BaseBdev3",
00:09:44.824 "uuid": "0f7505cc-c4b8-4eb8-977c-37f585038952",
00:09:44.824 "is_configured": true,
00:09:44.824 "data_offset": 2048,
00:09:44.824 "data_size": 63488
00:09:44.824 }
00:09:44.824 ]
00:09:44.824 }'
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:44.824 08:47:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.394 [2024-09-28 08:47:23.180782] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.394 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.394 [2024-09-28 08:47:23.339019] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:45.394 [2024-09-28 08:47:23.339199] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:45.654 [2024-09-28 08:47:23.438668] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:45.654 [2024-09-28 08:47:23.438839] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:45.654 [2024-09-28 08:47:23.438887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.654 BaseBdev2
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.654 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.654 [
00:09:45.654 {
00:09:45.654 "name": "BaseBdev2",
00:09:45.654 "aliases": [
00:09:45.654 "07f19e6a-4206-40e6-8eab-f6752e16216a"
00:09:45.654 ],
00:09:45.654 "product_name": "Malloc disk",
00:09:45.654 "block_size": 512,
00:09:45.654 "num_blocks": 65536,
00:09:45.654 "uuid": "07f19e6a-4206-40e6-8eab-f6752e16216a",
00:09:45.654 "assigned_rate_limits": {
00:09:45.654 "rw_ios_per_sec": 0,
00:09:45.654 "rw_mbytes_per_sec": 0,
00:09:45.654 "r_mbytes_per_sec": 0,
00:09:45.654 "w_mbytes_per_sec": 0
00:09:45.654 },
00:09:45.654 "claimed": false,
00:09:45.654 "zoned": false,
00:09:45.654 "supported_io_types": {
00:09:45.654 "read": true,
00:09:45.654 "write": true,
00:09:45.654 "unmap": true,
00:09:45.654 "flush": true,
00:09:45.654 "reset": true,
00:09:45.654 "nvme_admin": false,
00:09:45.654 "nvme_io": false,
00:09:45.654 "nvme_io_md": false,
00:09:45.654 "write_zeroes": true,
00:09:45.654 "zcopy": true,
00:09:45.654 "get_zone_info": false,
00:09:45.654 "zone_management": false,
00:09:45.654 "zone_append": false,
00:09:45.654 "compare": false,
00:09:45.654 "compare_and_write": false,
00:09:45.654 "abort": true,
00:09:45.654 "seek_hole": false,
00:09:45.654 "seek_data": false,
00:09:45.654 "copy": true,
00:09:45.654 "nvme_iov_md": false
00:09:45.654 },
00:09:45.654 "memory_domains": [
00:09:45.654 {
00:09:45.654 "dma_device_id": "system",
00:09:45.654 "dma_device_type": 1
00:09:45.654 },
00:09:45.654 {
00:09:45.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:45.654 "dma_device_type": 2
00:09:45.654 }
00:09:45.654 ],
00:09:45.654 "driver_specific": {}
00:09:45.654 }
00:09:45.654 ]
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.655 BaseBdev3
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.655 [
00:09:45.655 {
00:09:45.655 "name": "BaseBdev3",
00:09:45.655 "aliases": [
00:09:45.655 "d3dc9d7b-b2d3-429a-a3d2-0d8e12b301d4"
00:09:45.655 ],
00:09:45.655 "product_name": "Malloc disk",
00:09:45.655 "block_size": 512,
00:09:45.655 "num_blocks": 65536,
00:09:45.655 "uuid": "d3dc9d7b-b2d3-429a-a3d2-0d8e12b301d4",
00:09:45.655 "assigned_rate_limits": {
00:09:45.655 "rw_ios_per_sec": 0,
00:09:45.655 "rw_mbytes_per_sec": 0,
00:09:45.655 "r_mbytes_per_sec": 0,
00:09:45.655 "w_mbytes_per_sec": 0
00:09:45.655 },
00:09:45.655 "claimed": false,
00:09:45.655 "zoned": false,
00:09:45.655 "supported_io_types": {
00:09:45.655 "read": true,
00:09:45.655 "write": true,
00:09:45.655 "unmap": true,
00:09:45.655 "flush": true,
00:09:45.655 "reset": true,
00:09:45.655 "nvme_admin": false,
00:09:45.655 "nvme_io": false,
00:09:45.655 "nvme_io_md": false,
00:09:45.655 "write_zeroes": true,
00:09:45.655 "zcopy": true,
00:09:45.655 "get_zone_info": false,
00:09:45.655 "zone_management": false,
00:09:45.655 "zone_append": false,
00:09:45.655 "compare": false,
00:09:45.655 "compare_and_write": false,
00:09:45.655 "abort": true,
00:09:45.655 "seek_hole": false,
00:09:45.655 "seek_data": false,
00:09:45.655 "copy": true,
00:09:45.655 "nvme_iov_md": false
00:09:45.655 },
00:09:45.655 "memory_domains": [
00:09:45.655 {
00:09:45.655 "dma_device_id": "system",
00:09:45.655 "dma_device_type": 1
00:09:45.655 },
00:09:45.655 {
00:09:45.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:45.655 "dma_device_type": 2
00:09:45.655 }
00:09:45.655 ],
00:09:45.655 "driver_specific": {}
00:09:45.655 }
00:09:45.655 ]
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.655 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.655 [2024-09-28 08:47:23.641503] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:45.655 [2024-09-28 08:47:23.641552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:45.655 [2024-09-28 08:47:23.641572] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:45.655 [2024-09-28 08:47:23.643625] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:45.915 "name": "Existed_Raid",
00:09:45.915 "uuid": "f218d7e6-6948-4b11-8738-8f5386fe825f",
00:09:45.915 "strip_size_kb": 0,
00:09:45.915 "state": "configuring",
00:09:45.915 "raid_level": "raid1",
00:09:45.915 "superblock": true,
00:09:45.915 "num_base_bdevs": 3,
00:09:45.915 "num_base_bdevs_discovered": 2,
00:09:45.915 "num_base_bdevs_operational": 3,
00:09:45.915 "base_bdevs_list": [
00:09:45.915 {
00:09:45.915 "name": "BaseBdev1",
00:09:45.915 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:45.915 "is_configured": false,
00:09:45.915 "data_offset": 0,
00:09:45.915 "data_size": 0
00:09:45.915 },
00:09:45.915 {
00:09:45.915 "name": "BaseBdev2",
00:09:45.915 "uuid": "07f19e6a-4206-40e6-8eab-f6752e16216a",
00:09:45.915 "is_configured": true,
00:09:45.915 "data_offset": 2048,
00:09:45.915 "data_size": 63488
00:09:45.915 },
00:09:45.915 {
00:09:45.915 "name": "BaseBdev3",
00:09:45.915 "uuid": "d3dc9d7b-b2d3-429a-a3d2-0d8e12b301d4",
00:09:45.915 "is_configured": true,
00:09:45.915 "data_offset": 2048,
00:09:45.915 "data_size": 63488
00:09:45.915 }
00:09:45.915 ]
00:09:45.915 }'
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:45.915 08:47:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.176 [2024-09-28 08:47:24.032822] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:46.176 "name": "Existed_Raid",
00:09:46.176 "uuid": "f218d7e6-6948-4b11-8738-8f5386fe825f",
00:09:46.176 "strip_size_kb": 0,
00:09:46.176 "state": "configuring",
00:09:46.176 "raid_level": "raid1",
00:09:46.176 "superblock": true,
00:09:46.176 "num_base_bdevs": 3,
00:09:46.176 "num_base_bdevs_discovered": 1,
00:09:46.176 "num_base_bdevs_operational": 3,
00:09:46.176 "base_bdevs_list": [
00:09:46.176 {
00:09:46.176 "name": "BaseBdev1",
00:09:46.176 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:46.176 "is_configured": false,
00:09:46.176 "data_offset": 0,
00:09:46.176 "data_size": 0
00:09:46.176 },
00:09:46.176 {
00:09:46.176 "name": null,
00:09:46.176 "uuid": "07f19e6a-4206-40e6-8eab-f6752e16216a",
00:09:46.176 "is_configured": false,
00:09:46.176 "data_offset": 0,
00:09:46.176 "data_size": 63488
00:09:46.176 },
00:09:46.176 {
00:09:46.176 "name": "BaseBdev3",
00:09:46.176 "uuid": "d3dc9d7b-b2d3-429a-a3d2-0d8e12b301d4",
00:09:46.176 "is_configured": true,
00:09:46.176 "data_offset": 2048,
00:09:46.176 "data_size": 63488
00:09:46.176 }
00:09:46.176 ]
00:09:46.176 }'
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:46.176 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.745 [2024-09-28 08:47:24.589639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:46.745 BaseBdev1
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.745 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.745 [
00:09:46.745 {
00:09:46.745 "name": "BaseBdev1",
00:09:46.745 "aliases": [
00:09:46.745 "2190c613-2901-403a-a8a1-86c3599a65ec"
00:09:46.745 ],
00:09:46.745 "product_name": "Malloc disk",
00:09:46.745 "block_size": 512,
00:09:46.745 "num_blocks": 65536,
00:09:46.745 "uuid": "2190c613-2901-403a-a8a1-86c3599a65ec",
00:09:46.745 "assigned_rate_limits": {
00:09:46.745 "rw_ios_per_sec": 0,
00:09:46.745 "rw_mbytes_per_sec": 0,
00:09:46.745 "r_mbytes_per_sec": 0,
00:09:46.745 "w_mbytes_per_sec": 0
00:09:46.745 },
00:09:46.745 "claimed": true,
00:09:46.745 "claim_type": "exclusive_write",
00:09:46.745 "zoned": false,
00:09:46.745 "supported_io_types": {
00:09:46.745 "read": true,
00:09:46.745 "write": true,
00:09:46.745 "unmap": true,
00:09:46.745 "flush": true,
00:09:46.745 "reset": true,
00:09:46.745 "nvme_admin": false,
00:09:46.745 "nvme_io": false,
00:09:46.745 "nvme_io_md": false,
00:09:46.745 "write_zeroes": true,
00:09:46.746 "zcopy": true,
00:09:46.746 "get_zone_info": false,
00:09:46.746 "zone_management": false,
00:09:46.746 "zone_append": false,
00:09:46.746 "compare": false,
00:09:46.746 "compare_and_write": false,
00:09:46.746 "abort": true,
00:09:46.746 "seek_hole": false,
00:09:46.746 "seek_data": false,
00:09:46.746 "copy": true,
00:09:46.746 "nvme_iov_md": false
00:09:46.746 },
00:09:46.746 "memory_domains": [
00:09:46.746 {
00:09:46.746 "dma_device_id": "system",
00:09:46.746 "dma_device_type": 1
00:09:46.746 },
00:09:46.746 {
00:09:46.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:46.746 "dma_device_type": 2
00:09:46.746 }
00:09:46.746 ],
00:09:46.746 "driver_specific": {}
00:09:46.746 }
00:09:46.746 ]
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:46.746 "name": "Existed_Raid",
00:09:46.746 "uuid": "f218d7e6-6948-4b11-8738-8f5386fe825f",
00:09:46.746 "strip_size_kb": 0,
00:09:46.746 "state": "configuring",
00:09:46.746 "raid_level": "raid1",
00:09:46.746 "superblock": true,
00:09:46.746 "num_base_bdevs": 3,
00:09:46.746 "num_base_bdevs_discovered": 2,
00:09:46.746 "num_base_bdevs_operational": 3,
00:09:46.746 "base_bdevs_list": [
00:09:46.746 {
00:09:46.746 "name": "BaseBdev1",
00:09:46.746 "uuid": "2190c613-2901-403a-a8a1-86c3599a65ec",
00:09:46.746 "is_configured": true,
00:09:46.746 "data_offset": 2048,
00:09:46.746 "data_size": 63488
00:09:46.746 },
00:09:46.746 {
00:09:46.746 "name": null,
00:09:46.746 "uuid": "07f19e6a-4206-40e6-8eab-f6752e16216a",
00:09:46.746 "is_configured": false,
00:09:46.746 "data_offset": 0,
00:09:46.746 "data_size": 63488
00:09:46.746 },
00:09:46.746 {
00:09:46.746 "name": "BaseBdev3",
00:09:46.746 "uuid": "d3dc9d7b-b2d3-429a-a3d2-0d8e12b301d4",
00:09:46.746 "is_configured": true,
00:09:46.746 "data_offset": 2048,
00:09:46.746 "data_size": 63488
00:09:46.746 }
00:09:46.746 ]
00:09:46.746 }'
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:46.746 08:47:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.314 [2024-09-28 08:47:25.104797] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.314 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:47.314 "name": "Existed_Raid",
00:09:47.314 "uuid": "f218d7e6-6948-4b11-8738-8f5386fe825f",
00:09:47.314 "strip_size_kb": 0,
00:09:47.314 "state": "configuring",
00:09:47.314 "raid_level": "raid1",
00:09:47.314 "superblock": true,
00:09:47.314 "num_base_bdevs": 3,
00:09:47.314 "num_base_bdevs_discovered": 1,
00:09:47.314 "num_base_bdevs_operational": 3,
00:09:47.314 "base_bdevs_list": [
00:09:47.314 {
00:09:47.314 "name": "BaseBdev1",
00:09:47.314 "uuid": "2190c613-2901-403a-a8a1-86c3599a65ec",
00:09:47.314 "is_configured": true,
00:09:47.314 "data_offset": 2048,
00:09:47.314 "data_size": 63488
00:09:47.314 },
00:09:47.314 {
00:09:47.314 "name": null,
00:09:47.314 "uuid": "07f19e6a-4206-40e6-8eab-f6752e16216a",
00:09:47.314 "is_configured": false,
00:09:47.314 "data_offset": 0,
00:09:47.315 "data_size": 63488
00:09:47.315 },
00:09:47.315 {
00:09:47.315 "name": null,
00:09:47.315 "uuid": "d3dc9d7b-b2d3-429a-a3d2-0d8e12b301d4",
00:09:47.315 "is_configured": false,
00:09:47.315 "data_offset": 0,
00:09:47.315 "data_size": 63488
00:09:47.315 }
00:09:47.315 ]
00:09:47.315 }'
00:09:47.315 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:47.315 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:47.574 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:47.574 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:47.574 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- #
xtrace_disable 00:09:47.574 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.574 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.574 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:47.574 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:47.574 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.574 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.833 [2024-09-28 08:47:25.572016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.833 "name": "Existed_Raid", 00:09:47.833 "uuid": "f218d7e6-6948-4b11-8738-8f5386fe825f", 00:09:47.833 "strip_size_kb": 0, 00:09:47.833 "state": "configuring", 00:09:47.833 "raid_level": "raid1", 00:09:47.833 "superblock": true, 00:09:47.833 "num_base_bdevs": 3, 00:09:47.833 "num_base_bdevs_discovered": 2, 00:09:47.833 "num_base_bdevs_operational": 3, 00:09:47.833 "base_bdevs_list": [ 00:09:47.833 { 00:09:47.833 "name": "BaseBdev1", 00:09:47.833 "uuid": "2190c613-2901-403a-a8a1-86c3599a65ec", 00:09:47.833 "is_configured": true, 00:09:47.833 "data_offset": 2048, 00:09:47.833 "data_size": 63488 00:09:47.833 }, 00:09:47.833 { 00:09:47.833 "name": null, 00:09:47.833 "uuid": "07f19e6a-4206-40e6-8eab-f6752e16216a", 00:09:47.833 "is_configured": false, 00:09:47.833 "data_offset": 0, 00:09:47.833 "data_size": 63488 00:09:47.833 }, 00:09:47.833 { 00:09:47.833 "name": "BaseBdev3", 00:09:47.833 "uuid": "d3dc9d7b-b2d3-429a-a3d2-0d8e12b301d4", 00:09:47.833 "is_configured": true, 00:09:47.833 "data_offset": 2048, 00:09:47.833 "data_size": 63488 00:09:47.833 } 00:09:47.833 ] 00:09:47.833 }' 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.833 08:47:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.093 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:48.093 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.093 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.093 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.093 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.093 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:48.093 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:48.093 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.093 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.093 [2024-09-28 08:47:26.043323] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.352 "name": "Existed_Raid", 00:09:48.352 "uuid": "f218d7e6-6948-4b11-8738-8f5386fe825f", 00:09:48.352 "strip_size_kb": 0, 00:09:48.352 "state": "configuring", 00:09:48.352 "raid_level": "raid1", 00:09:48.352 "superblock": true, 00:09:48.352 "num_base_bdevs": 3, 00:09:48.352 "num_base_bdevs_discovered": 1, 00:09:48.352 "num_base_bdevs_operational": 3, 00:09:48.352 "base_bdevs_list": [ 00:09:48.352 { 00:09:48.352 "name": null, 00:09:48.352 "uuid": "2190c613-2901-403a-a8a1-86c3599a65ec", 00:09:48.352 "is_configured": false, 00:09:48.352 "data_offset": 0, 00:09:48.352 "data_size": 63488 00:09:48.352 }, 00:09:48.352 { 00:09:48.352 "name": null, 00:09:48.352 "uuid": 
"07f19e6a-4206-40e6-8eab-f6752e16216a", 00:09:48.352 "is_configured": false, 00:09:48.352 "data_offset": 0, 00:09:48.352 "data_size": 63488 00:09:48.352 }, 00:09:48.352 { 00:09:48.352 "name": "BaseBdev3", 00:09:48.352 "uuid": "d3dc9d7b-b2d3-429a-a3d2-0d8e12b301d4", 00:09:48.352 "is_configured": true, 00:09:48.352 "data_offset": 2048, 00:09:48.352 "data_size": 63488 00:09:48.352 } 00:09:48.352 ] 00:09:48.352 }' 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.352 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.611 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.611 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.611 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:48.611 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.611 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.870 [2024-09-28 08:47:26.634743] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.870 "name": "Existed_Raid", 00:09:48.870 "uuid": "f218d7e6-6948-4b11-8738-8f5386fe825f", 00:09:48.870 "strip_size_kb": 0, 00:09:48.870 "state": "configuring", 00:09:48.870 
"raid_level": "raid1", 00:09:48.870 "superblock": true, 00:09:48.870 "num_base_bdevs": 3, 00:09:48.870 "num_base_bdevs_discovered": 2, 00:09:48.870 "num_base_bdevs_operational": 3, 00:09:48.870 "base_bdevs_list": [ 00:09:48.870 { 00:09:48.870 "name": null, 00:09:48.870 "uuid": "2190c613-2901-403a-a8a1-86c3599a65ec", 00:09:48.870 "is_configured": false, 00:09:48.870 "data_offset": 0, 00:09:48.870 "data_size": 63488 00:09:48.870 }, 00:09:48.870 { 00:09:48.870 "name": "BaseBdev2", 00:09:48.870 "uuid": "07f19e6a-4206-40e6-8eab-f6752e16216a", 00:09:48.870 "is_configured": true, 00:09:48.870 "data_offset": 2048, 00:09:48.870 "data_size": 63488 00:09:48.870 }, 00:09:48.870 { 00:09:48.870 "name": "BaseBdev3", 00:09:48.870 "uuid": "d3dc9d7b-b2d3-429a-a3d2-0d8e12b301d4", 00:09:48.870 "is_configured": true, 00:09:48.870 "data_offset": 2048, 00:09:48.870 "data_size": 63488 00:09:48.870 } 00:09:48.870 ] 00:09:48.870 }' 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.870 08:47:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.129 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:49.129 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.129 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.129 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.129 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.389 08:47:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2190c613-2901-403a-a8a1-86c3599a65ec 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.389 [2024-09-28 08:47:27.215725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:49.389 [2024-09-28 08:47:27.216052] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:49.389 [2024-09-28 08:47:27.216100] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:49.389 [2024-09-28 08:47:27.216407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:49.389 [2024-09-28 08:47:27.216610] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:49.389 [2024-09-28 08:47:27.216665] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:49.389 NewBaseBdev 00:09:49.389 [2024-09-28 08:47:27.216856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.389 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.390 [ 00:09:49.390 { 00:09:49.390 "name": "NewBaseBdev", 00:09:49.390 "aliases": [ 00:09:49.390 "2190c613-2901-403a-a8a1-86c3599a65ec" 00:09:49.390 ], 00:09:49.390 "product_name": "Malloc disk", 00:09:49.390 "block_size": 512, 00:09:49.390 "num_blocks": 65536, 00:09:49.390 "uuid": "2190c613-2901-403a-a8a1-86c3599a65ec", 00:09:49.390 "assigned_rate_limits": { 00:09:49.390 "rw_ios_per_sec": 0, 00:09:49.390 "rw_mbytes_per_sec": 0, 00:09:49.390 "r_mbytes_per_sec": 0, 00:09:49.390 "w_mbytes_per_sec": 0 00:09:49.390 }, 00:09:49.390 "claimed": true, 00:09:49.390 "claim_type": "exclusive_write", 00:09:49.390 
"zoned": false, 00:09:49.390 "supported_io_types": { 00:09:49.390 "read": true, 00:09:49.390 "write": true, 00:09:49.390 "unmap": true, 00:09:49.390 "flush": true, 00:09:49.390 "reset": true, 00:09:49.390 "nvme_admin": false, 00:09:49.390 "nvme_io": false, 00:09:49.390 "nvme_io_md": false, 00:09:49.390 "write_zeroes": true, 00:09:49.390 "zcopy": true, 00:09:49.390 "get_zone_info": false, 00:09:49.390 "zone_management": false, 00:09:49.390 "zone_append": false, 00:09:49.390 "compare": false, 00:09:49.390 "compare_and_write": false, 00:09:49.390 "abort": true, 00:09:49.390 "seek_hole": false, 00:09:49.390 "seek_data": false, 00:09:49.390 "copy": true, 00:09:49.390 "nvme_iov_md": false 00:09:49.390 }, 00:09:49.390 "memory_domains": [ 00:09:49.390 { 00:09:49.390 "dma_device_id": "system", 00:09:49.390 "dma_device_type": 1 00:09:49.390 }, 00:09:49.390 { 00:09:49.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.390 "dma_device_type": 2 00:09:49.390 } 00:09:49.390 ], 00:09:49.390 "driver_specific": {} 00:09:49.390 } 00:09:49.390 ] 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.390 "name": "Existed_Raid", 00:09:49.390 "uuid": "f218d7e6-6948-4b11-8738-8f5386fe825f", 00:09:49.390 "strip_size_kb": 0, 00:09:49.390 "state": "online", 00:09:49.390 "raid_level": "raid1", 00:09:49.390 "superblock": true, 00:09:49.390 "num_base_bdevs": 3, 00:09:49.390 "num_base_bdevs_discovered": 3, 00:09:49.390 "num_base_bdevs_operational": 3, 00:09:49.390 "base_bdevs_list": [ 00:09:49.390 { 00:09:49.390 "name": "NewBaseBdev", 00:09:49.390 "uuid": "2190c613-2901-403a-a8a1-86c3599a65ec", 00:09:49.390 "is_configured": true, 00:09:49.390 "data_offset": 2048, 00:09:49.390 "data_size": 63488 00:09:49.390 }, 00:09:49.390 { 00:09:49.390 "name": "BaseBdev2", 00:09:49.390 "uuid": "07f19e6a-4206-40e6-8eab-f6752e16216a", 00:09:49.390 "is_configured": true, 00:09:49.390 "data_offset": 2048, 00:09:49.390 "data_size": 63488 00:09:49.390 }, 00:09:49.390 
{ 00:09:49.390 "name": "BaseBdev3", 00:09:49.390 "uuid": "d3dc9d7b-b2d3-429a-a3d2-0d8e12b301d4", 00:09:49.390 "is_configured": true, 00:09:49.390 "data_offset": 2048, 00:09:49.390 "data_size": 63488 00:09:49.390 } 00:09:49.390 ] 00:09:49.390 }' 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.390 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.674 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:49.674 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.945 [2024-09-28 08:47:27.667344] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.945 "name": "Existed_Raid", 00:09:49.945 
"aliases": [ 00:09:49.945 "f218d7e6-6948-4b11-8738-8f5386fe825f" 00:09:49.945 ], 00:09:49.945 "product_name": "Raid Volume", 00:09:49.945 "block_size": 512, 00:09:49.945 "num_blocks": 63488, 00:09:49.945 "uuid": "f218d7e6-6948-4b11-8738-8f5386fe825f", 00:09:49.945 "assigned_rate_limits": { 00:09:49.945 "rw_ios_per_sec": 0, 00:09:49.945 "rw_mbytes_per_sec": 0, 00:09:49.945 "r_mbytes_per_sec": 0, 00:09:49.945 "w_mbytes_per_sec": 0 00:09:49.945 }, 00:09:49.945 "claimed": false, 00:09:49.945 "zoned": false, 00:09:49.945 "supported_io_types": { 00:09:49.945 "read": true, 00:09:49.945 "write": true, 00:09:49.945 "unmap": false, 00:09:49.945 "flush": false, 00:09:49.945 "reset": true, 00:09:49.945 "nvme_admin": false, 00:09:49.945 "nvme_io": false, 00:09:49.945 "nvme_io_md": false, 00:09:49.945 "write_zeroes": true, 00:09:49.945 "zcopy": false, 00:09:49.945 "get_zone_info": false, 00:09:49.945 "zone_management": false, 00:09:49.945 "zone_append": false, 00:09:49.945 "compare": false, 00:09:49.945 "compare_and_write": false, 00:09:49.945 "abort": false, 00:09:49.945 "seek_hole": false, 00:09:49.945 "seek_data": false, 00:09:49.945 "copy": false, 00:09:49.945 "nvme_iov_md": false 00:09:49.945 }, 00:09:49.945 "memory_domains": [ 00:09:49.945 { 00:09:49.945 "dma_device_id": "system", 00:09:49.945 "dma_device_type": 1 00:09:49.945 }, 00:09:49.945 { 00:09:49.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.945 "dma_device_type": 2 00:09:49.945 }, 00:09:49.945 { 00:09:49.945 "dma_device_id": "system", 00:09:49.945 "dma_device_type": 1 00:09:49.945 }, 00:09:49.945 { 00:09:49.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.945 "dma_device_type": 2 00:09:49.945 }, 00:09:49.945 { 00:09:49.945 "dma_device_id": "system", 00:09:49.945 "dma_device_type": 1 00:09:49.945 }, 00:09:49.945 { 00:09:49.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.945 "dma_device_type": 2 00:09:49.945 } 00:09:49.945 ], 00:09:49.945 "driver_specific": { 00:09:49.945 "raid": { 00:09:49.945 
"uuid": "f218d7e6-6948-4b11-8738-8f5386fe825f", 00:09:49.945 "strip_size_kb": 0, 00:09:49.945 "state": "online", 00:09:49.945 "raid_level": "raid1", 00:09:49.945 "superblock": true, 00:09:49.945 "num_base_bdevs": 3, 00:09:49.945 "num_base_bdevs_discovered": 3, 00:09:49.945 "num_base_bdevs_operational": 3, 00:09:49.945 "base_bdevs_list": [ 00:09:49.945 { 00:09:49.945 "name": "NewBaseBdev", 00:09:49.945 "uuid": "2190c613-2901-403a-a8a1-86c3599a65ec", 00:09:49.945 "is_configured": true, 00:09:49.945 "data_offset": 2048, 00:09:49.945 "data_size": 63488 00:09:49.945 }, 00:09:49.945 { 00:09:49.945 "name": "BaseBdev2", 00:09:49.945 "uuid": "07f19e6a-4206-40e6-8eab-f6752e16216a", 00:09:49.945 "is_configured": true, 00:09:49.945 "data_offset": 2048, 00:09:49.945 "data_size": 63488 00:09:49.945 }, 00:09:49.945 { 00:09:49.945 "name": "BaseBdev3", 00:09:49.945 "uuid": "d3dc9d7b-b2d3-429a-a3d2-0d8e12b301d4", 00:09:49.945 "is_configured": true, 00:09:49.945 "data_offset": 2048, 00:09:49.945 "data_size": 63488 00:09:49.945 } 00:09:49.945 ] 00:09:49.945 } 00:09:49.945 } 00:09:49.945 }' 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:49.945 BaseBdev2 00:09:49.945 BaseBdev3' 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:49.945 08:47:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.945 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:49.946 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.946 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.946 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.946 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.946 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.946 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.946 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:49.946 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.946 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.946 [2024-09-28 08:47:27.934545] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:49.946 [2024-09-28 08:47:27.934580] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.946 [2024-09-28 08:47:27.934673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.946 [2024-09-28 08:47:27.935020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.946 [2024-09-28 08:47:27.935032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:50.205 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.205 08:47:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68034 00:09:50.205 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 68034 ']' 
00:09:50.205 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 68034 00:09:50.205 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:50.205 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:50.205 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68034 00:09:50.205 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:50.205 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:50.205 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68034' 00:09:50.205 killing process with pid 68034 00:09:50.205 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 68034 00:09:50.205 [2024-09-28 08:47:27.980323] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.205 08:47:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 68034 00:09:50.464 [2024-09-28 08:47:28.292358] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.844 08:47:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:51.844 00:09:51.844 real 0m10.595s 00:09:51.844 user 0m16.429s 00:09:51.844 sys 0m2.000s 00:09:51.844 ************************************ 00:09:51.844 END TEST raid_state_function_test_sb 00:09:51.844 ************************************ 00:09:51.844 08:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.844 08:47:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.844 08:47:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:09:51.844 08:47:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:51.844 08:47:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.844 08:47:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.844 ************************************ 00:09:51.844 START TEST raid_superblock_test 00:09:51.844 ************************************ 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68654 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68654 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 68654 ']' 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.844 08:47:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.844 [2024-09-28 08:47:29.789425] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:51.844 [2024-09-28 08:47:29.789549] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68654 ] 00:09:52.104 [2024-09-28 08:47:29.957031] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.364 [2024-09-28 08:47:30.195958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.623 [2024-09-28 08:47:30.416583] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.623 [2024-09-28 08:47:30.416619] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.623 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.623 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:52.623 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:52.623 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:52.623 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:52.623 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:52.623 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:52.623 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:52.623 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:52.623 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:52.623 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:52.623 
08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.623 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.885 malloc1 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.885 [2024-09-28 08:47:30.657792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:52.885 [2024-09-28 08:47:30.657914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.885 [2024-09-28 08:47:30.657956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:52.885 [2024-09-28 08:47:30.657990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.885 [2024-09-28 08:47:30.660389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.885 [2024-09-28 08:47:30.660457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:52.885 pt1 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.885 malloc2 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.885 [2024-09-28 08:47:30.748752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:52.885 [2024-09-28 08:47:30.748807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.885 [2024-09-28 08:47:30.748829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:52.885 [2024-09-28 08:47:30.748838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.885 [2024-09-28 08:47:30.751123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.885 [2024-09-28 08:47:30.751179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:52.885 
pt2 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.885 malloc3 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.885 [2024-09-28 08:47:30.809467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:52.885 [2024-09-28 08:47:30.809568] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.885 [2024-09-28 08:47:30.809604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:52.885 [2024-09-28 08:47:30.809631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.885 [2024-09-28 08:47:30.812041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.885 [2024-09-28 08:47:30.812126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:52.885 pt3 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.885 [2024-09-28 08:47:30.821521] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:52.885 [2024-09-28 08:47:30.823626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:52.885 [2024-09-28 08:47:30.823757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:52.885 [2024-09-28 08:47:30.823928] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:52.885 [2024-09-28 08:47:30.823976] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:52.885 [2024-09-28 08:47:30.824220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:52.885 
[2024-09-28 08:47:30.824402] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:52.885 [2024-09-28 08:47:30.824413] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:52.885 [2024-09-28 08:47:30.824579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.885 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.885 "name": "raid_bdev1", 00:09:52.885 "uuid": "f1ee6784-20a4-4ff4-85e7-ba3340149428", 00:09:52.885 "strip_size_kb": 0, 00:09:52.885 "state": "online", 00:09:52.885 "raid_level": "raid1", 00:09:52.885 "superblock": true, 00:09:52.885 "num_base_bdevs": 3, 00:09:52.885 "num_base_bdevs_discovered": 3, 00:09:52.885 "num_base_bdevs_operational": 3, 00:09:52.885 "base_bdevs_list": [ 00:09:52.885 { 00:09:52.885 "name": "pt1", 00:09:52.885 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:52.885 "is_configured": true, 00:09:52.885 "data_offset": 2048, 00:09:52.885 "data_size": 63488 00:09:52.885 }, 00:09:52.885 { 00:09:52.885 "name": "pt2", 00:09:52.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.885 "is_configured": true, 00:09:52.885 "data_offset": 2048, 00:09:52.885 "data_size": 63488 00:09:52.886 }, 00:09:52.886 { 00:09:52.886 "name": "pt3", 00:09:52.886 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.886 "is_configured": true, 00:09:52.886 "data_offset": 2048, 00:09:52.886 "data_size": 63488 00:09:52.886 } 00:09:52.886 ] 00:09:52.886 }' 00:09:52.886 08:47:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.886 08:47:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.455 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:53.455 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:53.455 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:53.455 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:53.455 08:47:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:53.455 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:53.455 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:53.455 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:53.455 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.455 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.455 [2024-09-28 08:47:31.261050] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.455 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.455 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:53.455 "name": "raid_bdev1", 00:09:53.455 "aliases": [ 00:09:53.455 "f1ee6784-20a4-4ff4-85e7-ba3340149428" 00:09:53.455 ], 00:09:53.455 "product_name": "Raid Volume", 00:09:53.455 "block_size": 512, 00:09:53.455 "num_blocks": 63488, 00:09:53.455 "uuid": "f1ee6784-20a4-4ff4-85e7-ba3340149428", 00:09:53.455 "assigned_rate_limits": { 00:09:53.455 "rw_ios_per_sec": 0, 00:09:53.455 "rw_mbytes_per_sec": 0, 00:09:53.455 "r_mbytes_per_sec": 0, 00:09:53.455 "w_mbytes_per_sec": 0 00:09:53.455 }, 00:09:53.455 "claimed": false, 00:09:53.455 "zoned": false, 00:09:53.455 "supported_io_types": { 00:09:53.455 "read": true, 00:09:53.455 "write": true, 00:09:53.455 "unmap": false, 00:09:53.455 "flush": false, 00:09:53.455 "reset": true, 00:09:53.455 "nvme_admin": false, 00:09:53.455 "nvme_io": false, 00:09:53.455 "nvme_io_md": false, 00:09:53.455 "write_zeroes": true, 00:09:53.455 "zcopy": false, 00:09:53.455 "get_zone_info": false, 00:09:53.455 "zone_management": false, 00:09:53.455 "zone_append": false, 00:09:53.455 "compare": false, 00:09:53.455 
"compare_and_write": false, 00:09:53.455 "abort": false, 00:09:53.455 "seek_hole": false, 00:09:53.455 "seek_data": false, 00:09:53.455 "copy": false, 00:09:53.455 "nvme_iov_md": false 00:09:53.455 }, 00:09:53.455 "memory_domains": [ 00:09:53.455 { 00:09:53.455 "dma_device_id": "system", 00:09:53.455 "dma_device_type": 1 00:09:53.455 }, 00:09:53.455 { 00:09:53.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.455 "dma_device_type": 2 00:09:53.455 }, 00:09:53.455 { 00:09:53.455 "dma_device_id": "system", 00:09:53.455 "dma_device_type": 1 00:09:53.455 }, 00:09:53.455 { 00:09:53.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.455 "dma_device_type": 2 00:09:53.455 }, 00:09:53.455 { 00:09:53.455 "dma_device_id": "system", 00:09:53.455 "dma_device_type": 1 00:09:53.455 }, 00:09:53.455 { 00:09:53.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.455 "dma_device_type": 2 00:09:53.455 } 00:09:53.455 ], 00:09:53.455 "driver_specific": { 00:09:53.455 "raid": { 00:09:53.455 "uuid": "f1ee6784-20a4-4ff4-85e7-ba3340149428", 00:09:53.455 "strip_size_kb": 0, 00:09:53.455 "state": "online", 00:09:53.455 "raid_level": "raid1", 00:09:53.455 "superblock": true, 00:09:53.455 "num_base_bdevs": 3, 00:09:53.455 "num_base_bdevs_discovered": 3, 00:09:53.455 "num_base_bdevs_operational": 3, 00:09:53.455 "base_bdevs_list": [ 00:09:53.455 { 00:09:53.455 "name": "pt1", 00:09:53.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:53.455 "is_configured": true, 00:09:53.455 "data_offset": 2048, 00:09:53.455 "data_size": 63488 00:09:53.455 }, 00:09:53.455 { 00:09:53.455 "name": "pt2", 00:09:53.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:53.455 "is_configured": true, 00:09:53.456 "data_offset": 2048, 00:09:53.456 "data_size": 63488 00:09:53.456 }, 00:09:53.456 { 00:09:53.456 "name": "pt3", 00:09:53.456 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:53.456 "is_configured": true, 00:09:53.456 "data_offset": 2048, 00:09:53.456 "data_size": 63488 00:09:53.456 } 
00:09:53.456 ] 00:09:53.456 } 00:09:53.456 } 00:09:53.456 }' 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:53.456 pt2 00:09:53.456 pt3' 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.456 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.716 [2024-09-28 08:47:31.508540] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f1ee6784-20a4-4ff4-85e7-ba3340149428 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f1ee6784-20a4-4ff4-85e7-ba3340149428 ']' 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.716 [2024-09-28 08:47:31.556202] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:53.716 [2024-09-28 08:47:31.556265] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.716 [2024-09-28 08:47:31.556351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.716 [2024-09-28 08:47:31.556459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.716 [2024-09-28 08:47:31.556561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:53.716 08:47:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.716 [2024-09-28 08:47:31.699983] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:53.716 [2024-09-28 08:47:31.702159] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:53.716 [2024-09-28 08:47:31.702207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:09:53.716 [2024-09-28 08:47:31.702255] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:53.716 [2024-09-28 08:47:31.702302] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:53.716 [2024-09-28 08:47:31.702320] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:53.716 [2024-09-28 08:47:31.702335] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:53.716 [2024-09-28 08:47:31.702345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:53.716 request: 00:09:53.716 { 00:09:53.716 "name": "raid_bdev1", 00:09:53.716 "raid_level": "raid1", 00:09:53.716 "base_bdevs": [ 00:09:53.716 "malloc1", 00:09:53.716 "malloc2", 00:09:53.716 "malloc3" 00:09:53.716 ], 00:09:53.716 "superblock": false, 00:09:53.716 "method": "bdev_raid_create", 00:09:53.716 "req_id": 1 00:09:53.716 } 00:09:53.716 Got JSON-RPC error response 00:09:53.716 response: 00:09:53.716 { 00:09:53.716 "code": -17, 00:09:53.716 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:53.716 } 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:53.716 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.977 08:47:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.977 [2024-09-28 08:47:31.767832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:53.977 [2024-09-28 08:47:31.767920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.977 [2024-09-28 08:47:31.767962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:53.977 [2024-09-28 08:47:31.767989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.977 [2024-09-28 08:47:31.770438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.977 [2024-09-28 08:47:31.770502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:53.977 [2024-09-28 08:47:31.770598] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:53.977 [2024-09-28 08:47:31.770673] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:53.977 pt1 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.977 "name": "raid_bdev1", 00:09:53.977 "uuid": "f1ee6784-20a4-4ff4-85e7-ba3340149428", 00:09:53.977 "strip_size_kb": 0, 00:09:53.977 "state": "configuring", 00:09:53.977 
"raid_level": "raid1", 00:09:53.977 "superblock": true, 00:09:53.977 "num_base_bdevs": 3, 00:09:53.977 "num_base_bdevs_discovered": 1, 00:09:53.977 "num_base_bdevs_operational": 3, 00:09:53.977 "base_bdevs_list": [ 00:09:53.977 { 00:09:53.977 "name": "pt1", 00:09:53.977 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:53.977 "is_configured": true, 00:09:53.977 "data_offset": 2048, 00:09:53.977 "data_size": 63488 00:09:53.977 }, 00:09:53.977 { 00:09:53.977 "name": null, 00:09:53.977 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:53.977 "is_configured": false, 00:09:53.977 "data_offset": 2048, 00:09:53.977 "data_size": 63488 00:09:53.977 }, 00:09:53.977 { 00:09:53.977 "name": null, 00:09:53.977 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:53.977 "is_configured": false, 00:09:53.977 "data_offset": 2048, 00:09:53.977 "data_size": 63488 00:09:53.977 } 00:09:53.977 ] 00:09:53.977 }' 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.977 08:47:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.237 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:54.237 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:54.237 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.237 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.237 [2024-09-28 08:47:32.211080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:54.237 [2024-09-28 08:47:32.211149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.237 [2024-09-28 08:47:32.211174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:54.237 [2024-09-28 08:47:32.211184] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.237 [2024-09-28 08:47:32.211629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.237 [2024-09-28 08:47:32.211646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:54.237 [2024-09-28 08:47:32.211744] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:54.237 [2024-09-28 08:47:32.211766] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.237 pt2 00:09:54.237 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.237 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:54.237 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.237 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.237 [2024-09-28 08:47:32.223069] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:54.237 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.237 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:54.237 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.238 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.238 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.238 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.238 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.238 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:54.238 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.238 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.238 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.497 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.497 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.497 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.497 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.497 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.498 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.498 "name": "raid_bdev1", 00:09:54.498 "uuid": "f1ee6784-20a4-4ff4-85e7-ba3340149428", 00:09:54.498 "strip_size_kb": 0, 00:09:54.498 "state": "configuring", 00:09:54.498 "raid_level": "raid1", 00:09:54.498 "superblock": true, 00:09:54.498 "num_base_bdevs": 3, 00:09:54.498 "num_base_bdevs_discovered": 1, 00:09:54.498 "num_base_bdevs_operational": 3, 00:09:54.498 "base_bdevs_list": [ 00:09:54.498 { 00:09:54.498 "name": "pt1", 00:09:54.498 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.498 "is_configured": true, 00:09:54.498 "data_offset": 2048, 00:09:54.498 "data_size": 63488 00:09:54.498 }, 00:09:54.498 { 00:09:54.498 "name": null, 00:09:54.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.498 "is_configured": false, 00:09:54.498 "data_offset": 0, 00:09:54.498 "data_size": 63488 00:09:54.498 }, 00:09:54.498 { 00:09:54.498 "name": null, 00:09:54.498 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.498 "is_configured": false, 00:09:54.498 "data_offset": 2048, 00:09:54.498 
"data_size": 63488 00:09:54.498 } 00:09:54.498 ] 00:09:54.498 }' 00:09:54.498 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.498 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.758 [2024-09-28 08:47:32.650299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:54.758 [2024-09-28 08:47:32.650395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.758 [2024-09-28 08:47:32.650428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:54.758 [2024-09-28 08:47:32.650457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.758 [2024-09-28 08:47:32.650913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.758 [2024-09-28 08:47:32.650972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:54.758 [2024-09-28 08:47:32.651087] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:54.758 [2024-09-28 08:47:32.651156] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.758 pt2 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.758 [2024-09-28 08:47:32.662301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:54.758 [2024-09-28 08:47:32.662394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.758 [2024-09-28 08:47:32.662429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:54.758 [2024-09-28 08:47:32.662462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.758 [2024-09-28 08:47:32.662860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.758 [2024-09-28 08:47:32.662918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:54.758 [2024-09-28 08:47:32.662999] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:54.758 [2024-09-28 08:47:32.663046] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:54.758 [2024-09-28 08:47:32.663195] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:54.758 [2024-09-28 08:47:32.663236] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:54.758 [2024-09-28 08:47:32.663503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:54.758 [2024-09-28 08:47:32.663729] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:09:54.758 [2024-09-28 08:47:32.663771] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:54.758 [2024-09-28 08:47:32.663943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.758 pt3 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.758 "name": "raid_bdev1", 00:09:54.758 "uuid": "f1ee6784-20a4-4ff4-85e7-ba3340149428", 00:09:54.758 "strip_size_kb": 0, 00:09:54.758 "state": "online", 00:09:54.758 "raid_level": "raid1", 00:09:54.758 "superblock": true, 00:09:54.758 "num_base_bdevs": 3, 00:09:54.758 "num_base_bdevs_discovered": 3, 00:09:54.758 "num_base_bdevs_operational": 3, 00:09:54.758 "base_bdevs_list": [ 00:09:54.758 { 00:09:54.758 "name": "pt1", 00:09:54.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.758 "is_configured": true, 00:09:54.758 "data_offset": 2048, 00:09:54.758 "data_size": 63488 00:09:54.758 }, 00:09:54.758 { 00:09:54.758 "name": "pt2", 00:09:54.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.758 "is_configured": true, 00:09:54.758 "data_offset": 2048, 00:09:54.758 "data_size": 63488 00:09:54.758 }, 00:09:54.758 { 00:09:54.758 "name": "pt3", 00:09:54.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.758 "is_configured": true, 00:09:54.758 "data_offset": 2048, 00:09:54.758 "data_size": 63488 00:09:54.758 } 00:09:54.758 ] 00:09:54.758 }' 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.758 08:47:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.328 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:55.328 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:55.328 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.328 08:47:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:55.328 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.328 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.328 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:55.328 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.328 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.328 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.328 [2024-09-28 08:47:33.117874] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.328 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.328 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:55.328 "name": "raid_bdev1", 00:09:55.328 "aliases": [ 00:09:55.328 "f1ee6784-20a4-4ff4-85e7-ba3340149428" 00:09:55.328 ], 00:09:55.328 "product_name": "Raid Volume", 00:09:55.328 "block_size": 512, 00:09:55.328 "num_blocks": 63488, 00:09:55.328 "uuid": "f1ee6784-20a4-4ff4-85e7-ba3340149428", 00:09:55.328 "assigned_rate_limits": { 00:09:55.328 "rw_ios_per_sec": 0, 00:09:55.328 "rw_mbytes_per_sec": 0, 00:09:55.328 "r_mbytes_per_sec": 0, 00:09:55.328 "w_mbytes_per_sec": 0 00:09:55.328 }, 00:09:55.328 "claimed": false, 00:09:55.328 "zoned": false, 00:09:55.328 "supported_io_types": { 00:09:55.328 "read": true, 00:09:55.328 "write": true, 00:09:55.328 "unmap": false, 00:09:55.328 "flush": false, 00:09:55.328 "reset": true, 00:09:55.328 "nvme_admin": false, 00:09:55.328 "nvme_io": false, 00:09:55.328 "nvme_io_md": false, 00:09:55.328 "write_zeroes": true, 00:09:55.328 "zcopy": false, 00:09:55.328 "get_zone_info": false, 00:09:55.328 
"zone_management": false, 00:09:55.328 "zone_append": false, 00:09:55.328 "compare": false, 00:09:55.328 "compare_and_write": false, 00:09:55.328 "abort": false, 00:09:55.328 "seek_hole": false, 00:09:55.328 "seek_data": false, 00:09:55.328 "copy": false, 00:09:55.328 "nvme_iov_md": false 00:09:55.328 }, 00:09:55.328 "memory_domains": [ 00:09:55.328 { 00:09:55.329 "dma_device_id": "system", 00:09:55.329 "dma_device_type": 1 00:09:55.329 }, 00:09:55.329 { 00:09:55.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.329 "dma_device_type": 2 00:09:55.329 }, 00:09:55.329 { 00:09:55.329 "dma_device_id": "system", 00:09:55.329 "dma_device_type": 1 00:09:55.329 }, 00:09:55.329 { 00:09:55.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.329 "dma_device_type": 2 00:09:55.329 }, 00:09:55.329 { 00:09:55.329 "dma_device_id": "system", 00:09:55.329 "dma_device_type": 1 00:09:55.329 }, 00:09:55.329 { 00:09:55.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.329 "dma_device_type": 2 00:09:55.329 } 00:09:55.329 ], 00:09:55.329 "driver_specific": { 00:09:55.329 "raid": { 00:09:55.329 "uuid": "f1ee6784-20a4-4ff4-85e7-ba3340149428", 00:09:55.329 "strip_size_kb": 0, 00:09:55.329 "state": "online", 00:09:55.329 "raid_level": "raid1", 00:09:55.329 "superblock": true, 00:09:55.329 "num_base_bdevs": 3, 00:09:55.329 "num_base_bdevs_discovered": 3, 00:09:55.329 "num_base_bdevs_operational": 3, 00:09:55.329 "base_bdevs_list": [ 00:09:55.329 { 00:09:55.329 "name": "pt1", 00:09:55.329 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.329 "is_configured": true, 00:09:55.329 "data_offset": 2048, 00:09:55.329 "data_size": 63488 00:09:55.329 }, 00:09:55.329 { 00:09:55.329 "name": "pt2", 00:09:55.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.329 "is_configured": true, 00:09:55.329 "data_offset": 2048, 00:09:55.329 "data_size": 63488 00:09:55.329 }, 00:09:55.329 { 00:09:55.329 "name": "pt3", 00:09:55.329 "uuid": "00000000-0000-0000-0000-000000000003", 
00:09:55.329 "is_configured": true, 00:09:55.329 "data_offset": 2048, 00:09:55.329 "data_size": 63488 00:09:55.329 } 00:09:55.329 ] 00:09:55.329 } 00:09:55.329 } 00:09:55.329 }' 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:55.329 pt2 00:09:55.329 pt3' 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.329 
08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.329 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.589 [2024-09-28 08:47:33.357358] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f1ee6784-20a4-4ff4-85e7-ba3340149428 '!=' f1ee6784-20a4-4ff4-85e7-ba3340149428 ']' 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.589 [2024-09-28 08:47:33.405069] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.589 "name": "raid_bdev1", 00:09:55.589 "uuid": "f1ee6784-20a4-4ff4-85e7-ba3340149428", 00:09:55.589 "strip_size_kb": 0, 00:09:55.589 "state": "online", 00:09:55.589 "raid_level": "raid1", 00:09:55.589 "superblock": true, 00:09:55.589 "num_base_bdevs": 3, 00:09:55.589 "num_base_bdevs_discovered": 2, 00:09:55.589 "num_base_bdevs_operational": 2, 00:09:55.589 "base_bdevs_list": [ 00:09:55.589 { 00:09:55.589 "name": null, 00:09:55.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.589 "is_configured": false, 00:09:55.589 "data_offset": 0, 00:09:55.589 "data_size": 63488 00:09:55.589 }, 00:09:55.589 { 00:09:55.589 "name": "pt2", 00:09:55.589 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.589 "is_configured": true, 00:09:55.589 "data_offset": 2048, 00:09:55.589 "data_size": 63488 00:09:55.589 }, 00:09:55.589 { 00:09:55.589 "name": "pt3", 00:09:55.589 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.589 "is_configured": true, 00:09:55.589 "data_offset": 2048, 00:09:55.589 "data_size": 63488 00:09:55.589 } 00:09:55.589 ] 00:09:55.589 }' 00:09:55.589 08:47:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.589 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.849 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:55.849 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.849 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.849 [2024-09-28 08:47:33.824351] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.849 [2024-09-28 08:47:33.824435] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.849 [2024-09-28 08:47:33.824533] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.849 [2024-09-28 08:47:33.824620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.849 [2024-09-28 08:47:33.824678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:55.849 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.849 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.849 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:55.849 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.849 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:56.109 
08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:56.109 [2024-09-28 08:47:33.900190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:56.109 [2024-09-28 08:47:33.900245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.109 [2024-09-28 08:47:33.900263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:56.109 [2024-09-28 08:47:33.900274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.109 [2024-09-28 08:47:33.902755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.109 [2024-09-28 08:47:33.902790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:56.109 [2024-09-28 08:47:33.902868] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:56.109 [2024-09-28 08:47:33.902919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:56.109 pt2 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.109 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.109 "name": "raid_bdev1", 00:09:56.109 "uuid": "f1ee6784-20a4-4ff4-85e7-ba3340149428", 00:09:56.109 "strip_size_kb": 0, 00:09:56.109 "state": "configuring", 00:09:56.109 "raid_level": "raid1", 00:09:56.109 "superblock": true, 00:09:56.109 "num_base_bdevs": 3, 00:09:56.109 "num_base_bdevs_discovered": 1, 00:09:56.109 "num_base_bdevs_operational": 2, 00:09:56.109 "base_bdevs_list": [ 00:09:56.109 { 00:09:56.109 "name": null, 00:09:56.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.109 "is_configured": false, 00:09:56.110 "data_offset": 2048, 00:09:56.110 "data_size": 63488 00:09:56.110 }, 00:09:56.110 { 00:09:56.110 "name": "pt2", 00:09:56.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.110 "is_configured": true, 00:09:56.110 "data_offset": 2048, 00:09:56.110 "data_size": 63488 00:09:56.110 }, 00:09:56.110 { 00:09:56.110 "name": null, 00:09:56.110 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.110 "is_configured": false, 00:09:56.110 "data_offset": 2048, 00:09:56.110 "data_size": 63488 00:09:56.110 } 00:09:56.110 ] 00:09:56.110 }' 
00:09:56.110 08:47:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.110 08:47:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.369 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:56.369 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:56.369 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:56.369 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:56.369 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.369 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.369 [2024-09-28 08:47:34.343457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:56.369 [2024-09-28 08:47:34.343555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.369 [2024-09-28 08:47:34.343593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:56.369 [2024-09-28 08:47:34.343623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.369 [2024-09-28 08:47:34.344129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.369 [2024-09-28 08:47:34.344187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:56.369 [2024-09-28 08:47:34.344290] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:56.369 [2024-09-28 08:47:34.344345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:56.369 [2024-09-28 08:47:34.344496] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:56.369 [2024-09-28 08:47:34.344536] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:56.369 [2024-09-28 08:47:34.344818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:56.369 [2024-09-28 08:47:34.345015] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:56.369 [2024-09-28 08:47:34.345051] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:56.369 [2024-09-28 08:47:34.345241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.369 pt3 00:09:56.369 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.369 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:56.369 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.369 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.370 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.370 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.370 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:56.370 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.370 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.370 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.370 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.370 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.370 08:47:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.370 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.370 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.629 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.629 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.629 "name": "raid_bdev1", 00:09:56.629 "uuid": "f1ee6784-20a4-4ff4-85e7-ba3340149428", 00:09:56.629 "strip_size_kb": 0, 00:09:56.629 "state": "online", 00:09:56.629 "raid_level": "raid1", 00:09:56.629 "superblock": true, 00:09:56.629 "num_base_bdevs": 3, 00:09:56.629 "num_base_bdevs_discovered": 2, 00:09:56.629 "num_base_bdevs_operational": 2, 00:09:56.629 "base_bdevs_list": [ 00:09:56.629 { 00:09:56.629 "name": null, 00:09:56.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.629 "is_configured": false, 00:09:56.629 "data_offset": 2048, 00:09:56.629 "data_size": 63488 00:09:56.629 }, 00:09:56.629 { 00:09:56.629 "name": "pt2", 00:09:56.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.629 "is_configured": true, 00:09:56.629 "data_offset": 2048, 00:09:56.629 "data_size": 63488 00:09:56.629 }, 00:09:56.629 { 00:09:56.629 "name": "pt3", 00:09:56.629 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.629 "is_configured": true, 00:09:56.629 "data_offset": 2048, 00:09:56.629 "data_size": 63488 00:09:56.630 } 00:09:56.630 ] 00:09:56.630 }' 00:09:56.630 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.630 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.890 
08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.890 [2024-09-28 08:47:34.766753] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:56.890 [2024-09-28 08:47:34.766782] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.890 [2024-09-28 08:47:34.766845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.890 [2024-09-28 08:47:34.766902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.890 [2024-09-28 08:47:34.766911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.890 08:47:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.890 [2024-09-28 08:47:34.842642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:56.890 [2024-09-28 08:47:34.842721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.890 [2024-09-28 08:47:34.842743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:56.890 [2024-09-28 08:47:34.842762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.890 [2024-09-28 08:47:34.845244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.890 [2024-09-28 08:47:34.845278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:56.890 [2024-09-28 08:47:34.845347] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:56.890 [2024-09-28 08:47:34.845388] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:56.890 [2024-09-28 08:47:34.845498] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:56.890 [2024-09-28 08:47:34.845510] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:56.890 [2024-09-28 08:47:34.845526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:56.890 [2024-09-28 
08:47:34.845586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:56.890 pt1 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.890 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.150 08:47:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.150 "name": "raid_bdev1", 00:09:57.150 "uuid": "f1ee6784-20a4-4ff4-85e7-ba3340149428", 00:09:57.150 "strip_size_kb": 0, 00:09:57.150 "state": "configuring", 00:09:57.150 "raid_level": "raid1", 00:09:57.150 "superblock": true, 00:09:57.150 "num_base_bdevs": 3, 00:09:57.150 "num_base_bdevs_discovered": 1, 00:09:57.150 "num_base_bdevs_operational": 2, 00:09:57.150 "base_bdevs_list": [ 00:09:57.150 { 00:09:57.150 "name": null, 00:09:57.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.150 "is_configured": false, 00:09:57.150 "data_offset": 2048, 00:09:57.150 "data_size": 63488 00:09:57.150 }, 00:09:57.150 { 00:09:57.150 "name": "pt2", 00:09:57.150 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.150 "is_configured": true, 00:09:57.150 "data_offset": 2048, 00:09:57.150 "data_size": 63488 00:09:57.150 }, 00:09:57.150 { 00:09:57.150 "name": null, 00:09:57.150 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:57.150 "is_configured": false, 00:09:57.150 "data_offset": 2048, 00:09:57.150 "data_size": 63488 00:09:57.150 } 00:09:57.150 ] 00:09:57.150 }' 00:09:57.150 08:47:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.150 08:47:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.410 [2024-09-28 08:47:35.373756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:57.410 [2024-09-28 08:47:35.373815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.410 [2024-09-28 08:47:35.373836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:57.410 [2024-09-28 08:47:35.373845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.410 [2024-09-28 08:47:35.374313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.410 [2024-09-28 08:47:35.374330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:57.410 [2024-09-28 08:47:35.374411] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:57.410 [2024-09-28 08:47:35.374456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:57.410 [2024-09-28 08:47:35.374591] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:57.410 [2024-09-28 08:47:35.374599] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:57.410 [2024-09-28 08:47:35.374916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:57.410 [2024-09-28 08:47:35.375085] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:57.410 [2024-09-28 08:47:35.375106] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:09:57.410 [2024-09-28 08:47:35.375260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.410 pt3 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.410 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.670 08:47:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.670 "name": "raid_bdev1", 00:09:57.670 "uuid": "f1ee6784-20a4-4ff4-85e7-ba3340149428", 00:09:57.670 "strip_size_kb": 0, 00:09:57.670 "state": "online", 00:09:57.670 "raid_level": "raid1", 00:09:57.670 "superblock": true, 00:09:57.670 "num_base_bdevs": 3, 00:09:57.670 "num_base_bdevs_discovered": 2, 00:09:57.670 "num_base_bdevs_operational": 2, 00:09:57.670 "base_bdevs_list": [ 00:09:57.670 { 00:09:57.670 "name": null, 00:09:57.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.670 "is_configured": false, 00:09:57.670 "data_offset": 2048, 00:09:57.670 "data_size": 63488 00:09:57.670 }, 00:09:57.670 { 00:09:57.670 "name": "pt2", 00:09:57.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.670 "is_configured": true, 00:09:57.670 "data_offset": 2048, 00:09:57.670 "data_size": 63488 00:09:57.670 }, 00:09:57.670 { 00:09:57.670 "name": "pt3", 00:09:57.670 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:57.670 "is_configured": true, 00:09:57.670 "data_offset": 2048, 00:09:57.670 "data_size": 63488 00:09:57.670 } 00:09:57.670 ] 00:09:57.670 }' 00:09:57.670 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.670 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:57.930 
08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:57.930 [2024-09-28 08:47:35.869133] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f1ee6784-20a4-4ff4-85e7-ba3340149428 '!=' f1ee6784-20a4-4ff4-85e7-ba3340149428 ']' 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68654 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 68654 ']' 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 68654 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:57.930 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68654 00:09:58.189 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:58.190 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:58.190 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68654' 00:09:58.190 killing process with pid 68654 00:09:58.190 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 68654 00:09:58.190 [2024-09-28 
08:47:35.933043] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.190 [2024-09-28 08:47:35.933177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.190 08:47:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 68654 00:09:58.190 [2024-09-28 08:47:35.933263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.190 [2024-09-28 08:47:35.933277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:58.450 [2024-09-28 08:47:36.255701] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.832 08:47:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:59.832 00:09:59.832 real 0m7.879s 00:09:59.832 user 0m12.068s 00:09:59.832 sys 0m1.478s 00:09:59.832 08:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.832 ************************************ 00:09:59.832 END TEST raid_superblock_test 00:09:59.832 ************************************ 00:09:59.832 08:47:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.832 08:47:37 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:59.832 08:47:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:59.832 08:47:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.832 08:47:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.832 ************************************ 00:09:59.832 START TEST raid_read_error_test 00:09:59.832 ************************************ 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:59.832 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:59.832 08:47:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:59.833 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:59.833 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:59.833 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:59.833 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.35RHnTQqSU 00:09:59.833 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69100 00:09:59.833 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:59.833 08:47:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69100 00:09:59.833 08:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 69100 ']' 00:09:59.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.833 08:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.833 08:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.833 08:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.833 08:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.833 08:47:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.833 [2024-09-28 08:47:37.759463] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:59.833 [2024-09-28 08:47:37.759592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69100 ] 00:10:00.093 [2024-09-28 08:47:37.928937] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.353 [2024-09-28 08:47:38.172866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.613 [2024-09-28 08:47:38.405193] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.613 [2024-09-28 08:47:38.405229] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.613 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.613 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:00.613 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:00.613 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:00.613 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.613 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.873 BaseBdev1_malloc 00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.873 true 00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.873 [2024-09-28 08:47:38.643198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:00.873 [2024-09-28 08:47:38.643315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.873 [2024-09-28 08:47:38.643338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:00.873 [2024-09-28 08:47:38.643350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.873 [2024-09-28 08:47:38.645623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.873 [2024-09-28 08:47:38.645678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:00.873 BaseBdev1 00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.873 BaseBdev2_malloc 00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.873 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.873 true 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.874 [2024-09-28 08:47:38.725758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:00.874 [2024-09-28 08:47:38.725810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.874 [2024-09-28 08:47:38.725825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:00.874 [2024-09-28 08:47:38.725835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.874 [2024-09-28 08:47:38.728177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.874 [2024-09-28 08:47:38.728216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:00.874 BaseBdev2 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.874 BaseBdev3_malloc 00:10:00.874 08:47:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.874 true 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.874 [2024-09-28 08:47:38.797903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:00.874 [2024-09-28 08:47:38.797951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.874 [2024-09-28 08:47:38.797967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:00.874 [2024-09-28 08:47:38.797977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.874 [2024-09-28 08:47:38.800355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.874 [2024-09-28 08:47:38.800394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:00.874 BaseBdev3 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.874 [2024-09-28 08:47:38.809963] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:00.874 [2024-09-28 08:47:38.812046] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.874 [2024-09-28 08:47:38.812118] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.874 [2024-09-28 08:47:38.812321] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:00.874 [2024-09-28 08:47:38.812333] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:00.874 [2024-09-28 08:47:38.812574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:00.874 [2024-09-28 08:47:38.812780] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:00.874 [2024-09-28 08:47:38.812796] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:00.874 [2024-09-28 08:47:38.812937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.874 08:47:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.874 "name": "raid_bdev1", 00:10:00.874 "uuid": "0a7c5538-2c76-49c7-b2d1-9e7eb8ba420d", 00:10:00.874 "strip_size_kb": 0, 00:10:00.874 "state": "online", 00:10:00.874 "raid_level": "raid1", 00:10:00.874 "superblock": true, 00:10:00.874 "num_base_bdevs": 3, 00:10:00.874 "num_base_bdevs_discovered": 3, 00:10:00.874 "num_base_bdevs_operational": 3, 00:10:00.874 "base_bdevs_list": [ 00:10:00.874 { 00:10:00.874 "name": "BaseBdev1", 00:10:00.874 "uuid": "0ec14e03-1b9c-59b4-95f1-a5d35edf21d8", 00:10:00.874 "is_configured": true, 00:10:00.874 "data_offset": 2048, 00:10:00.874 "data_size": 63488 00:10:00.874 }, 00:10:00.874 { 00:10:00.874 "name": "BaseBdev2", 00:10:00.874 "uuid": "f402ed28-704b-5ceb-b2b8-a79bf3dda83b", 00:10:00.874 "is_configured": true, 00:10:00.874 "data_offset": 2048, 00:10:00.874 "data_size": 63488 
00:10:00.874 }, 00:10:00.874 { 00:10:00.874 "name": "BaseBdev3", 00:10:00.874 "uuid": "d9c634d9-bd6e-5f18-b450-75e6adf0d9d5", 00:10:00.874 "is_configured": true, 00:10:00.874 "data_offset": 2048, 00:10:00.874 "data_size": 63488 00:10:00.874 } 00:10:00.874 ] 00:10:00.874 }' 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.874 08:47:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.444 08:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:01.444 08:47:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:01.444 [2024-09-28 08:47:39.330642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.384 
08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.384 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.385 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.385 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.385 "name": "raid_bdev1", 00:10:02.385 "uuid": "0a7c5538-2c76-49c7-b2d1-9e7eb8ba420d", 00:10:02.385 "strip_size_kb": 0, 00:10:02.385 "state": "online", 00:10:02.385 "raid_level": "raid1", 00:10:02.385 "superblock": true, 00:10:02.385 "num_base_bdevs": 3, 00:10:02.385 "num_base_bdevs_discovered": 3, 00:10:02.385 "num_base_bdevs_operational": 3, 00:10:02.385 "base_bdevs_list": [ 00:10:02.385 { 00:10:02.385 "name": "BaseBdev1", 00:10:02.385 "uuid": "0ec14e03-1b9c-59b4-95f1-a5d35edf21d8", 
00:10:02.385 "is_configured": true, 00:10:02.385 "data_offset": 2048, 00:10:02.385 "data_size": 63488 00:10:02.385 }, 00:10:02.385 { 00:10:02.385 "name": "BaseBdev2", 00:10:02.385 "uuid": "f402ed28-704b-5ceb-b2b8-a79bf3dda83b", 00:10:02.385 "is_configured": true, 00:10:02.385 "data_offset": 2048, 00:10:02.385 "data_size": 63488 00:10:02.385 }, 00:10:02.385 { 00:10:02.385 "name": "BaseBdev3", 00:10:02.385 "uuid": "d9c634d9-bd6e-5f18-b450-75e6adf0d9d5", 00:10:02.385 "is_configured": true, 00:10:02.385 "data_offset": 2048, 00:10:02.385 "data_size": 63488 00:10:02.385 } 00:10:02.385 ] 00:10:02.385 }' 00:10:02.385 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.385 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.955 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:02.955 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.955 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.955 [2024-09-28 08:47:40.700083] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:02.955 [2024-09-28 08:47:40.700181] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.955 [2024-09-28 08:47:40.702779] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.955 [2024-09-28 08:47:40.702864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.955 [2024-09-28 08:47:40.702994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.955 [2024-09-28 08:47:40.703058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:02.955 { 00:10:02.955 "results": [ 00:10:02.955 { 00:10:02.955 "job": "raid_bdev1", 
00:10:02.955 "core_mask": "0x1", 00:10:02.955 "workload": "randrw", 00:10:02.955 "percentage": 50, 00:10:02.955 "status": "finished", 00:10:02.955 "queue_depth": 1, 00:10:02.955 "io_size": 131072, 00:10:02.955 "runtime": 1.370042, 00:10:02.955 "iops": 10666.096367848577, 00:10:02.955 "mibps": 1333.262045981072, 00:10:02.955 "io_failed": 0, 00:10:02.955 "io_timeout": 0, 00:10:02.955 "avg_latency_us": 91.32595400936596, 00:10:02.955 "min_latency_us": 22.358078602620086, 00:10:02.955 "max_latency_us": 1488.1537117903931 00:10:02.955 } 00:10:02.955 ], 00:10:02.955 "core_count": 1 00:10:02.955 } 00:10:02.955 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.955 08:47:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69100 00:10:02.955 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 69100 ']' 00:10:02.955 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 69100 00:10:02.955 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:02.955 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.955 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69100 00:10:02.955 killing process with pid 69100 00:10:02.955 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:02.955 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:02.955 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69100' 00:10:02.955 08:47:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 69100 00:10:02.955 [2024-09-28 08:47:40.737338] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.955 08:47:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 69100 00:10:03.214 [2024-09-28 08:47:40.978376] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.595 08:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:04.595 08:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.35RHnTQqSU 00:10:04.595 08:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:04.595 08:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:04.595 08:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:04.595 ************************************ 00:10:04.595 END TEST raid_read_error_test 00:10:04.595 ************************************ 00:10:04.595 08:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:04.595 08:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:04.595 08:47:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:04.595 00:10:04.595 real 0m4.720s 00:10:04.595 user 0m5.430s 00:10:04.595 sys 0m0.668s 00:10:04.595 08:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.595 08:47:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.595 08:47:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:04.595 08:47:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:04.595 08:47:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.595 08:47:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.595 ************************************ 00:10:04.595 START TEST raid_write_error_test 00:10:04.595 ************************************ 00:10:04.595 08:47:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:10:04.595 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:04.595 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:04.595 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:04.595 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:04.595 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.595 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FpG8qcsH23 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69246 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69246 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 69246 ']' 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.596 08:47:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.596 [2024-09-28 08:47:42.555190] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:04.596 [2024-09-28 08:47:42.555378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69246 ] 00:10:04.856 [2024-09-28 08:47:42.722054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.115 [2024-09-28 08:47:42.960743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.375 [2024-09-28 08:47:43.185063] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.375 [2024-09-28 08:47:43.185102] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.635 BaseBdev1_malloc 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.635 true 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.635 [2024-09-28 08:47:43.447529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:05.635 [2024-09-28 08:47:43.447584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.635 [2024-09-28 08:47:43.447602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:05.635 [2024-09-28 08:47:43.447613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.635 [2024-09-28 08:47:43.450034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.635 [2024-09-28 08:47:43.450071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:05.635 BaseBdev1 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:05.635 BaseBdev2_malloc 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.635 true 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.635 [2024-09-28 08:47:43.550544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:05.635 [2024-09-28 08:47:43.550594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.635 [2024-09-28 08:47:43.550611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:05.635 [2024-09-28 08:47:43.550621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.635 [2024-09-28 08:47:43.552963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.635 [2024-09-28 08:47:43.553041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:05.635 BaseBdev2 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:05.635 08:47:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.635 BaseBdev3_malloc 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.635 true 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.635 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.635 [2024-09-28 08:47:43.622977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:05.635 [2024-09-28 08:47:43.623025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.635 [2024-09-28 08:47:43.623042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:05.635 [2024-09-28 08:47:43.623053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.635 [2024-09-28 08:47:43.625480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.635 [2024-09-28 08:47:43.625518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:05.895 BaseBdev3 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.895 [2024-09-28 08:47:43.635033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.895 [2024-09-28 08:47:43.637150] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.895 [2024-09-28 08:47:43.637228] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.895 [2024-09-28 08:47:43.637442] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:05.895 [2024-09-28 08:47:43.637454] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:05.895 [2024-09-28 08:47:43.637702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:05.895 [2024-09-28 08:47:43.637880] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:05.895 [2024-09-28 08:47:43.637894] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:05.895 [2024-09-28 08:47:43.638039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.895 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.896 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.896 "name": "raid_bdev1", 00:10:05.896 "uuid": "a4ce859d-7e7b-496b-b776-5ec2cac91b14", 00:10:05.896 "strip_size_kb": 0, 00:10:05.896 "state": "online", 00:10:05.896 "raid_level": "raid1", 00:10:05.896 "superblock": true, 00:10:05.896 "num_base_bdevs": 3, 00:10:05.896 "num_base_bdevs_discovered": 3, 00:10:05.896 "num_base_bdevs_operational": 3, 00:10:05.896 "base_bdevs_list": [ 00:10:05.896 { 00:10:05.896 "name": "BaseBdev1", 00:10:05.896 
"uuid": "fb3667e5-30e1-558b-8d92-b945688d4972", 00:10:05.896 "is_configured": true, 00:10:05.896 "data_offset": 2048, 00:10:05.896 "data_size": 63488 00:10:05.896 }, 00:10:05.896 { 00:10:05.896 "name": "BaseBdev2", 00:10:05.896 "uuid": "f8a96a8f-d7c4-57ec-899d-b42f01fd57db", 00:10:05.896 "is_configured": true, 00:10:05.896 "data_offset": 2048, 00:10:05.896 "data_size": 63488 00:10:05.896 }, 00:10:05.896 { 00:10:05.896 "name": "BaseBdev3", 00:10:05.896 "uuid": "1ffb8662-2c24-5d02-9cd5-639207c64f5f", 00:10:05.896 "is_configured": true, 00:10:05.896 "data_offset": 2048, 00:10:05.896 "data_size": 63488 00:10:05.896 } 00:10:05.896 ] 00:10:05.896 }' 00:10:05.896 08:47:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.896 08:47:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.155 08:47:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:06.155 08:47:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:06.414 [2024-09-28 08:47:44.191468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:07.354 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:07.354 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.354 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.354 [2024-09-28 08:47:45.106214] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:07.354 [2024-09-28 08:47:45.106351] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.354 [2024-09-28 08:47:45.106591] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 
00:10:07.354 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.354 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:07.354 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:07.354 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:07.354 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.355 "name": "raid_bdev1", 00:10:07.355 "uuid": "a4ce859d-7e7b-496b-b776-5ec2cac91b14", 00:10:07.355 "strip_size_kb": 0, 00:10:07.355 "state": "online", 00:10:07.355 "raid_level": "raid1", 00:10:07.355 "superblock": true, 00:10:07.355 "num_base_bdevs": 3, 00:10:07.355 "num_base_bdevs_discovered": 2, 00:10:07.355 "num_base_bdevs_operational": 2, 00:10:07.355 "base_bdevs_list": [ 00:10:07.355 { 00:10:07.355 "name": null, 00:10:07.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.355 "is_configured": false, 00:10:07.355 "data_offset": 0, 00:10:07.355 "data_size": 63488 00:10:07.355 }, 00:10:07.355 { 00:10:07.355 "name": "BaseBdev2", 00:10:07.355 "uuid": "f8a96a8f-d7c4-57ec-899d-b42f01fd57db", 00:10:07.355 "is_configured": true, 00:10:07.355 "data_offset": 2048, 00:10:07.355 "data_size": 63488 00:10:07.355 }, 00:10:07.355 { 00:10:07.355 "name": "BaseBdev3", 00:10:07.355 "uuid": "1ffb8662-2c24-5d02-9cd5-639207c64f5f", 00:10:07.355 "is_configured": true, 00:10:07.355 "data_offset": 2048, 00:10:07.355 "data_size": 63488 00:10:07.355 } 00:10:07.355 ] 00:10:07.355 }' 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.355 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.615 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:07.615 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.615 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.615 [2024-09-28 08:47:45.549457] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.615 [2024-09-28 08:47:45.549572] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.615 [2024-09-28 08:47:45.552205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.615 [2024-09-28 08:47:45.552297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.615 [2024-09-28 08:47:45.552402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.615 [2024-09-28 08:47:45.552478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:07.615 { 00:10:07.615 "results": [ 00:10:07.615 { 00:10:07.615 "job": "raid_bdev1", 00:10:07.615 "core_mask": "0x1", 00:10:07.615 "workload": "randrw", 00:10:07.615 "percentage": 50, 00:10:07.615 "status": "finished", 00:10:07.615 "queue_depth": 1, 00:10:07.615 "io_size": 131072, 00:10:07.615 "runtime": 1.358676, 00:10:07.615 "iops": 12086.766822995327, 00:10:07.615 "mibps": 1510.8458528744159, 00:10:07.615 "io_failed": 0, 00:10:07.615 "io_timeout": 0, 00:10:07.615 "avg_latency_us": 80.26767372982988, 00:10:07.615 "min_latency_us": 21.799126637554586, 00:10:07.615 "max_latency_us": 1359.3711790393013 00:10:07.615 } 00:10:07.615 ], 00:10:07.615 "core_count": 1 00:10:07.615 } 00:10:07.615 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.615 08:47:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69246 00:10:07.615 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 69246 ']' 00:10:07.615 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 69246 00:10:07.615 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:07.615 08:47:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:07.615 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69246 00:10:07.615 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:07.615 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:07.615 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69246' 00:10:07.615 killing process with pid 69246 00:10:07.615 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 69246 00:10:07.615 [2024-09-28 08:47:45.600355] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.615 08:47:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 69246 00:10:07.875 [2024-09-28 08:47:45.842695] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.305 08:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FpG8qcsH23 00:10:09.305 08:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:09.305 08:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:09.305 08:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:09.305 08:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:09.305 08:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:09.305 08:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:09.305 08:47:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:09.305 ************************************ 00:10:09.305 END TEST raid_write_error_test 00:10:09.305 
************************************ 00:10:09.305 00:10:09.305 real 0m4.796s 00:10:09.305 user 0m5.511s 00:10:09.305 sys 0m0.669s 00:10:09.305 08:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.305 08:47:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.305 08:47:47 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:09.305 08:47:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:09.305 08:47:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:09.305 08:47:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:09.305 08:47:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.305 08:47:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.565 ************************************ 00:10:09.565 START TEST raid_state_function_test 00:10:09.565 ************************************ 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:09.566 Process raid pid: 69389 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69389 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69389' 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69389 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 69389 ']' 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:09.566 08:47:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.566 [2024-09-28 08:47:47.413977] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:09.566 [2024-09-28 08:47:47.414206] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.826 [2024-09-28 08:47:47.582839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.086 [2024-09-28 08:47:47.828047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.086 [2024-09-28 08:47:48.069118] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.086 [2024-09-28 08:47:48.069217] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.344 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:10.344 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:10.344 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:10.344 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.344 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.345 [2024-09-28 08:47:48.245707] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.345 [2024-09-28 08:47:48.245831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.345 [2024-09-28 08:47:48.245863] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.345 [2024-09-28 08:47:48.245886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.345 [2024-09-28 08:47:48.245904] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:10.345 [2024-09-28 08:47:48.245925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:10.345 [2024-09-28 08:47:48.245942] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:10.345 [2024-09-28 08:47:48.245987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.345 "name": "Existed_Raid", 00:10:10.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.345 "strip_size_kb": 64, 00:10:10.345 "state": "configuring", 00:10:10.345 "raid_level": "raid0", 00:10:10.345 "superblock": false, 00:10:10.345 "num_base_bdevs": 4, 00:10:10.345 "num_base_bdevs_discovered": 0, 00:10:10.345 "num_base_bdevs_operational": 4, 00:10:10.345 "base_bdevs_list": [ 00:10:10.345 { 00:10:10.345 "name": "BaseBdev1", 00:10:10.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.345 "is_configured": false, 00:10:10.345 "data_offset": 0, 00:10:10.345 "data_size": 0 00:10:10.345 }, 00:10:10.345 { 00:10:10.345 "name": "BaseBdev2", 00:10:10.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.345 "is_configured": false, 00:10:10.345 "data_offset": 0, 00:10:10.345 "data_size": 0 00:10:10.345 }, 00:10:10.345 { 00:10:10.345 "name": "BaseBdev3", 00:10:10.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.345 "is_configured": false, 00:10:10.345 "data_offset": 0, 00:10:10.345 "data_size": 0 00:10:10.345 }, 00:10:10.345 { 00:10:10.345 "name": "BaseBdev4", 00:10:10.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.345 "is_configured": false, 00:10:10.345 "data_offset": 0, 00:10:10.345 "data_size": 0 00:10:10.345 } 00:10:10.345 ] 00:10:10.345 }' 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.345 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.914 [2024-09-28 08:47:48.708805] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.914 [2024-09-28 08:47:48.708890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.914 [2024-09-28 08:47:48.720823] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.914 [2024-09-28 08:47:48.720914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.914 [2024-09-28 08:47:48.720941] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.914 [2024-09-28 08:47:48.720963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.914 [2024-09-28 08:47:48.720981] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:10.914 [2024-09-28 08:47:48.721002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:10.914 [2024-09-28 08:47:48.721020] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:10.914 [2024-09-28 08:47:48.721041] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.914 [2024-09-28 08:47:48.807305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.914 BaseBdev1 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.914 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.914 [ 00:10:10.914 { 00:10:10.914 "name": "BaseBdev1", 00:10:10.914 "aliases": [ 00:10:10.914 "0b14e67f-dc52-4420-8026-a5773050536b" 00:10:10.914 ], 00:10:10.914 "product_name": "Malloc disk", 00:10:10.914 "block_size": 512, 00:10:10.914 "num_blocks": 65536, 00:10:10.914 "uuid": "0b14e67f-dc52-4420-8026-a5773050536b", 00:10:10.914 "assigned_rate_limits": { 00:10:10.914 "rw_ios_per_sec": 0, 00:10:10.914 "rw_mbytes_per_sec": 0, 00:10:10.914 "r_mbytes_per_sec": 0, 00:10:10.914 "w_mbytes_per_sec": 0 00:10:10.914 }, 00:10:10.914 "claimed": true, 00:10:10.915 "claim_type": "exclusive_write", 00:10:10.915 "zoned": false, 00:10:10.915 "supported_io_types": { 00:10:10.915 "read": true, 00:10:10.915 "write": true, 00:10:10.915 "unmap": true, 00:10:10.915 "flush": true, 00:10:10.915 "reset": true, 00:10:10.915 "nvme_admin": false, 00:10:10.915 "nvme_io": false, 00:10:10.915 "nvme_io_md": false, 00:10:10.915 "write_zeroes": true, 00:10:10.915 "zcopy": true, 00:10:10.915 "get_zone_info": false, 00:10:10.915 "zone_management": false, 00:10:10.915 "zone_append": false, 00:10:10.915 "compare": false, 00:10:10.915 "compare_and_write": false, 00:10:10.915 "abort": true, 00:10:10.915 "seek_hole": false, 00:10:10.915 "seek_data": false, 00:10:10.915 "copy": true, 00:10:10.915 "nvme_iov_md": false 00:10:10.915 }, 00:10:10.915 "memory_domains": [ 00:10:10.915 { 00:10:10.915 "dma_device_id": "system", 00:10:10.915 "dma_device_type": 1 00:10:10.915 }, 00:10:10.915 { 00:10:10.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.915 "dma_device_type": 2 00:10:10.915 } 00:10:10.915 ], 00:10:10.915 "driver_specific": {} 00:10:10.915 } 00:10:10.915 ] 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.915 "name": "Existed_Raid", 
00:10:10.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.915 "strip_size_kb": 64, 00:10:10.915 "state": "configuring", 00:10:10.915 "raid_level": "raid0", 00:10:10.915 "superblock": false, 00:10:10.915 "num_base_bdevs": 4, 00:10:10.915 "num_base_bdevs_discovered": 1, 00:10:10.915 "num_base_bdevs_operational": 4, 00:10:10.915 "base_bdevs_list": [ 00:10:10.915 { 00:10:10.915 "name": "BaseBdev1", 00:10:10.915 "uuid": "0b14e67f-dc52-4420-8026-a5773050536b", 00:10:10.915 "is_configured": true, 00:10:10.915 "data_offset": 0, 00:10:10.915 "data_size": 65536 00:10:10.915 }, 00:10:10.915 { 00:10:10.915 "name": "BaseBdev2", 00:10:10.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.915 "is_configured": false, 00:10:10.915 "data_offset": 0, 00:10:10.915 "data_size": 0 00:10:10.915 }, 00:10:10.915 { 00:10:10.915 "name": "BaseBdev3", 00:10:10.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.915 "is_configured": false, 00:10:10.915 "data_offset": 0, 00:10:10.915 "data_size": 0 00:10:10.915 }, 00:10:10.915 { 00:10:10.915 "name": "BaseBdev4", 00:10:10.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.915 "is_configured": false, 00:10:10.915 "data_offset": 0, 00:10:10.915 "data_size": 0 00:10:10.915 } 00:10:10.915 ] 00:10:10.915 }' 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.915 08:47:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.484 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.484 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.484 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.484 [2024-09-28 08:47:49.302525] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.484 [2024-09-28 08:47:49.302627] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:11.484 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.484 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:11.484 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.484 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.484 [2024-09-28 08:47:49.314546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.484 [2024-09-28 08:47:49.316736] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.484 [2024-09-28 08:47:49.316778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.484 [2024-09-28 08:47:49.316788] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:11.484 [2024-09-28 08:47:49.316816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:11.484 [2024-09-28 08:47:49.316823] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:11.484 [2024-09-28 08:47:49.316831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:11.484 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.484 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:11.484 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.484 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:11.484 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.484 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.484 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.484 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.485 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.485 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.485 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.485 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.485 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.485 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.485 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.485 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.485 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.485 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.485 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.485 "name": "Existed_Raid", 00:10:11.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.485 "strip_size_kb": 64, 00:10:11.485 "state": "configuring", 00:10:11.485 "raid_level": "raid0", 00:10:11.485 "superblock": false, 00:10:11.485 "num_base_bdevs": 4, 00:10:11.485 
"num_base_bdevs_discovered": 1, 00:10:11.485 "num_base_bdevs_operational": 4, 00:10:11.485 "base_bdevs_list": [ 00:10:11.485 { 00:10:11.485 "name": "BaseBdev1", 00:10:11.485 "uuid": "0b14e67f-dc52-4420-8026-a5773050536b", 00:10:11.485 "is_configured": true, 00:10:11.485 "data_offset": 0, 00:10:11.485 "data_size": 65536 00:10:11.485 }, 00:10:11.485 { 00:10:11.485 "name": "BaseBdev2", 00:10:11.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.485 "is_configured": false, 00:10:11.485 "data_offset": 0, 00:10:11.485 "data_size": 0 00:10:11.485 }, 00:10:11.485 { 00:10:11.485 "name": "BaseBdev3", 00:10:11.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.485 "is_configured": false, 00:10:11.485 "data_offset": 0, 00:10:11.485 "data_size": 0 00:10:11.485 }, 00:10:11.485 { 00:10:11.485 "name": "BaseBdev4", 00:10:11.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.485 "is_configured": false, 00:10:11.485 "data_offset": 0, 00:10:11.485 "data_size": 0 00:10:11.485 } 00:10:11.485 ] 00:10:11.485 }' 00:10:11.485 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.485 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.053 [2024-09-28 08:47:49.797291] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.053 BaseBdev2 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:12.053 08:47:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.053 [ 00:10:12.053 { 00:10:12.053 "name": "BaseBdev2", 00:10:12.053 "aliases": [ 00:10:12.053 "8fe78b8f-9a97-4f7f-bf70-0614aa6afbc2" 00:10:12.053 ], 00:10:12.053 "product_name": "Malloc disk", 00:10:12.053 "block_size": 512, 00:10:12.053 "num_blocks": 65536, 00:10:12.053 "uuid": "8fe78b8f-9a97-4f7f-bf70-0614aa6afbc2", 00:10:12.053 "assigned_rate_limits": { 00:10:12.053 "rw_ios_per_sec": 0, 00:10:12.053 "rw_mbytes_per_sec": 0, 00:10:12.053 "r_mbytes_per_sec": 0, 00:10:12.053 "w_mbytes_per_sec": 0 00:10:12.053 }, 00:10:12.053 "claimed": true, 00:10:12.053 "claim_type": "exclusive_write", 00:10:12.053 "zoned": false, 00:10:12.053 "supported_io_types": { 
00:10:12.053 "read": true, 00:10:12.053 "write": true, 00:10:12.053 "unmap": true, 00:10:12.053 "flush": true, 00:10:12.053 "reset": true, 00:10:12.053 "nvme_admin": false, 00:10:12.053 "nvme_io": false, 00:10:12.053 "nvme_io_md": false, 00:10:12.053 "write_zeroes": true, 00:10:12.053 "zcopy": true, 00:10:12.053 "get_zone_info": false, 00:10:12.053 "zone_management": false, 00:10:12.053 "zone_append": false, 00:10:12.053 "compare": false, 00:10:12.053 "compare_and_write": false, 00:10:12.053 "abort": true, 00:10:12.053 "seek_hole": false, 00:10:12.053 "seek_data": false, 00:10:12.053 "copy": true, 00:10:12.053 "nvme_iov_md": false 00:10:12.053 }, 00:10:12.053 "memory_domains": [ 00:10:12.053 { 00:10:12.053 "dma_device_id": "system", 00:10:12.053 "dma_device_type": 1 00:10:12.053 }, 00:10:12.053 { 00:10:12.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.053 "dma_device_type": 2 00:10:12.053 } 00:10:12.053 ], 00:10:12.053 "driver_specific": {} 00:10:12.053 } 00:10:12.053 ] 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:12.053 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.054 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.054 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.054 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.054 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.054 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.054 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.054 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.054 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.054 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.054 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.054 "name": "Existed_Raid", 00:10:12.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.054 "strip_size_kb": 64, 00:10:12.054 "state": "configuring", 00:10:12.054 "raid_level": "raid0", 00:10:12.054 "superblock": false, 00:10:12.054 "num_base_bdevs": 4, 00:10:12.054 "num_base_bdevs_discovered": 2, 00:10:12.054 "num_base_bdevs_operational": 4, 00:10:12.054 "base_bdevs_list": [ 00:10:12.054 { 00:10:12.054 "name": "BaseBdev1", 00:10:12.054 "uuid": "0b14e67f-dc52-4420-8026-a5773050536b", 00:10:12.054 "is_configured": true, 00:10:12.054 "data_offset": 0, 00:10:12.054 "data_size": 65536 00:10:12.054 }, 00:10:12.054 { 00:10:12.054 "name": "BaseBdev2", 00:10:12.054 "uuid": "8fe78b8f-9a97-4f7f-bf70-0614aa6afbc2", 00:10:12.054 
"is_configured": true, 00:10:12.054 "data_offset": 0, 00:10:12.054 "data_size": 65536 00:10:12.054 }, 00:10:12.054 { 00:10:12.054 "name": "BaseBdev3", 00:10:12.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.054 "is_configured": false, 00:10:12.054 "data_offset": 0, 00:10:12.054 "data_size": 0 00:10:12.054 }, 00:10:12.054 { 00:10:12.054 "name": "BaseBdev4", 00:10:12.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.054 "is_configured": false, 00:10:12.054 "data_offset": 0, 00:10:12.054 "data_size": 0 00:10:12.054 } 00:10:12.054 ] 00:10:12.054 }' 00:10:12.054 08:47:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.054 08:47:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.313 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:12.313 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.313 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.313 [2024-09-28 08:47:50.304025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.313 BaseBdev3 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.572 [ 00:10:12.572 { 00:10:12.572 "name": "BaseBdev3", 00:10:12.572 "aliases": [ 00:10:12.572 "70d0d134-2711-41c8-bcfd-91c716326887" 00:10:12.572 ], 00:10:12.572 "product_name": "Malloc disk", 00:10:12.572 "block_size": 512, 00:10:12.572 "num_blocks": 65536, 00:10:12.572 "uuid": "70d0d134-2711-41c8-bcfd-91c716326887", 00:10:12.572 "assigned_rate_limits": { 00:10:12.572 "rw_ios_per_sec": 0, 00:10:12.572 "rw_mbytes_per_sec": 0, 00:10:12.572 "r_mbytes_per_sec": 0, 00:10:12.572 "w_mbytes_per_sec": 0 00:10:12.572 }, 00:10:12.572 "claimed": true, 00:10:12.572 "claim_type": "exclusive_write", 00:10:12.572 "zoned": false, 00:10:12.572 "supported_io_types": { 00:10:12.572 "read": true, 00:10:12.572 "write": true, 00:10:12.572 "unmap": true, 00:10:12.572 "flush": true, 00:10:12.572 "reset": true, 00:10:12.572 "nvme_admin": false, 00:10:12.572 "nvme_io": false, 00:10:12.572 "nvme_io_md": false, 00:10:12.572 "write_zeroes": true, 00:10:12.572 "zcopy": true, 00:10:12.572 "get_zone_info": false, 00:10:12.572 "zone_management": false, 00:10:12.572 "zone_append": false, 00:10:12.572 "compare": false, 00:10:12.572 "compare_and_write": false, 
00:10:12.572 "abort": true, 00:10:12.572 "seek_hole": false, 00:10:12.572 "seek_data": false, 00:10:12.572 "copy": true, 00:10:12.572 "nvme_iov_md": false 00:10:12.572 }, 00:10:12.572 "memory_domains": [ 00:10:12.572 { 00:10:12.572 "dma_device_id": "system", 00:10:12.572 "dma_device_type": 1 00:10:12.572 }, 00:10:12.572 { 00:10:12.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.572 "dma_device_type": 2 00:10:12.572 } 00:10:12.572 ], 00:10:12.572 "driver_specific": {} 00:10:12.572 } 00:10:12.572 ] 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.572 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.572 "name": "Existed_Raid", 00:10:12.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.572 "strip_size_kb": 64, 00:10:12.572 "state": "configuring", 00:10:12.572 "raid_level": "raid0", 00:10:12.572 "superblock": false, 00:10:12.572 "num_base_bdevs": 4, 00:10:12.572 "num_base_bdevs_discovered": 3, 00:10:12.572 "num_base_bdevs_operational": 4, 00:10:12.572 "base_bdevs_list": [ 00:10:12.572 { 00:10:12.572 "name": "BaseBdev1", 00:10:12.572 "uuid": "0b14e67f-dc52-4420-8026-a5773050536b", 00:10:12.572 "is_configured": true, 00:10:12.572 "data_offset": 0, 00:10:12.572 "data_size": 65536 00:10:12.572 }, 00:10:12.572 { 00:10:12.572 "name": "BaseBdev2", 00:10:12.572 "uuid": "8fe78b8f-9a97-4f7f-bf70-0614aa6afbc2", 00:10:12.572 "is_configured": true, 00:10:12.573 "data_offset": 0, 00:10:12.573 "data_size": 65536 00:10:12.573 }, 00:10:12.573 { 00:10:12.573 "name": "BaseBdev3", 00:10:12.573 "uuid": "70d0d134-2711-41c8-bcfd-91c716326887", 00:10:12.573 "is_configured": true, 00:10:12.573 "data_offset": 0, 00:10:12.573 "data_size": 65536 00:10:12.573 }, 00:10:12.573 { 00:10:12.573 "name": "BaseBdev4", 00:10:12.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.573 "is_configured": false, 
00:10:12.573 "data_offset": 0, 00:10:12.573 "data_size": 0 00:10:12.573 } 00:10:12.573 ] 00:10:12.573 }' 00:10:12.573 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.573 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.831 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:12.831 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.831 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.118 [2024-09-28 08:47:50.846263] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:13.118 [2024-09-28 08:47:50.846383] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:13.118 [2024-09-28 08:47:50.846398] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:13.118 [2024-09-28 08:47:50.846761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:13.118 [2024-09-28 08:47:50.846966] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:13.118 [2024-09-28 08:47:50.846983] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:13.118 [2024-09-28 08:47:50.847281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.118 BaseBdev4 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.118 [ 00:10:13.118 { 00:10:13.118 "name": "BaseBdev4", 00:10:13.118 "aliases": [ 00:10:13.118 "71e2bf70-54f8-4282-a413-9b831ba18aba" 00:10:13.118 ], 00:10:13.118 "product_name": "Malloc disk", 00:10:13.118 "block_size": 512, 00:10:13.118 "num_blocks": 65536, 00:10:13.118 "uuid": "71e2bf70-54f8-4282-a413-9b831ba18aba", 00:10:13.118 "assigned_rate_limits": { 00:10:13.118 "rw_ios_per_sec": 0, 00:10:13.118 "rw_mbytes_per_sec": 0, 00:10:13.118 "r_mbytes_per_sec": 0, 00:10:13.118 "w_mbytes_per_sec": 0 00:10:13.118 }, 00:10:13.118 "claimed": true, 00:10:13.118 "claim_type": "exclusive_write", 00:10:13.118 "zoned": false, 00:10:13.118 "supported_io_types": { 00:10:13.118 "read": true, 00:10:13.118 "write": true, 00:10:13.118 "unmap": true, 00:10:13.118 "flush": true, 00:10:13.118 "reset": true, 00:10:13.118 
"nvme_admin": false, 00:10:13.118 "nvme_io": false, 00:10:13.118 "nvme_io_md": false, 00:10:13.118 "write_zeroes": true, 00:10:13.118 "zcopy": true, 00:10:13.118 "get_zone_info": false, 00:10:13.118 "zone_management": false, 00:10:13.118 "zone_append": false, 00:10:13.118 "compare": false, 00:10:13.118 "compare_and_write": false, 00:10:13.118 "abort": true, 00:10:13.118 "seek_hole": false, 00:10:13.118 "seek_data": false, 00:10:13.118 "copy": true, 00:10:13.118 "nvme_iov_md": false 00:10:13.118 }, 00:10:13.118 "memory_domains": [ 00:10:13.118 { 00:10:13.118 "dma_device_id": "system", 00:10:13.118 "dma_device_type": 1 00:10:13.118 }, 00:10:13.118 { 00:10:13.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.118 "dma_device_type": 2 00:10:13.118 } 00:10:13.118 ], 00:10:13.118 "driver_specific": {} 00:10:13.118 } 00:10:13.118 ] 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.118 08:47:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.118 "name": "Existed_Raid", 00:10:13.118 "uuid": "f3db07f9-e5d2-4555-969d-9be8dfcd84a9", 00:10:13.118 "strip_size_kb": 64, 00:10:13.118 "state": "online", 00:10:13.118 "raid_level": "raid0", 00:10:13.118 "superblock": false, 00:10:13.118 "num_base_bdevs": 4, 00:10:13.118 "num_base_bdevs_discovered": 4, 00:10:13.118 "num_base_bdevs_operational": 4, 00:10:13.118 "base_bdevs_list": [ 00:10:13.118 { 00:10:13.118 "name": "BaseBdev1", 00:10:13.118 "uuid": "0b14e67f-dc52-4420-8026-a5773050536b", 00:10:13.118 "is_configured": true, 00:10:13.118 "data_offset": 0, 00:10:13.118 "data_size": 65536 00:10:13.118 }, 00:10:13.118 { 00:10:13.118 "name": "BaseBdev2", 00:10:13.118 "uuid": "8fe78b8f-9a97-4f7f-bf70-0614aa6afbc2", 00:10:13.118 "is_configured": true, 00:10:13.118 "data_offset": 0, 00:10:13.118 "data_size": 65536 00:10:13.118 }, 00:10:13.118 { 00:10:13.118 "name": "BaseBdev3", 00:10:13.118 "uuid": 
"70d0d134-2711-41c8-bcfd-91c716326887", 00:10:13.118 "is_configured": true, 00:10:13.118 "data_offset": 0, 00:10:13.118 "data_size": 65536 00:10:13.118 }, 00:10:13.118 { 00:10:13.118 "name": "BaseBdev4", 00:10:13.118 "uuid": "71e2bf70-54f8-4282-a413-9b831ba18aba", 00:10:13.118 "is_configured": true, 00:10:13.118 "data_offset": 0, 00:10:13.118 "data_size": 65536 00:10:13.118 } 00:10:13.118 ] 00:10:13.118 }' 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.118 08:47:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.377 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.377 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:13.377 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.377 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.377 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.377 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.377 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.377 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:13.377 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.377 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.377 [2024-09-28 08:47:51.345774] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.377 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.635 08:47:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.635 "name": "Existed_Raid", 00:10:13.635 "aliases": [ 00:10:13.635 "f3db07f9-e5d2-4555-969d-9be8dfcd84a9" 00:10:13.635 ], 00:10:13.635 "product_name": "Raid Volume", 00:10:13.635 "block_size": 512, 00:10:13.635 "num_blocks": 262144, 00:10:13.635 "uuid": "f3db07f9-e5d2-4555-969d-9be8dfcd84a9", 00:10:13.635 "assigned_rate_limits": { 00:10:13.635 "rw_ios_per_sec": 0, 00:10:13.635 "rw_mbytes_per_sec": 0, 00:10:13.635 "r_mbytes_per_sec": 0, 00:10:13.635 "w_mbytes_per_sec": 0 00:10:13.635 }, 00:10:13.635 "claimed": false, 00:10:13.635 "zoned": false, 00:10:13.635 "supported_io_types": { 00:10:13.635 "read": true, 00:10:13.635 "write": true, 00:10:13.635 "unmap": true, 00:10:13.635 "flush": true, 00:10:13.635 "reset": true, 00:10:13.635 "nvme_admin": false, 00:10:13.635 "nvme_io": false, 00:10:13.635 "nvme_io_md": false, 00:10:13.635 "write_zeroes": true, 00:10:13.635 "zcopy": false, 00:10:13.635 "get_zone_info": false, 00:10:13.635 "zone_management": false, 00:10:13.635 "zone_append": false, 00:10:13.635 "compare": false, 00:10:13.635 "compare_and_write": false, 00:10:13.635 "abort": false, 00:10:13.635 "seek_hole": false, 00:10:13.635 "seek_data": false, 00:10:13.635 "copy": false, 00:10:13.635 "nvme_iov_md": false 00:10:13.635 }, 00:10:13.635 "memory_domains": [ 00:10:13.635 { 00:10:13.635 "dma_device_id": "system", 00:10:13.635 "dma_device_type": 1 00:10:13.635 }, 00:10:13.635 { 00:10:13.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.635 "dma_device_type": 2 00:10:13.635 }, 00:10:13.635 { 00:10:13.635 "dma_device_id": "system", 00:10:13.635 "dma_device_type": 1 00:10:13.635 }, 00:10:13.635 { 00:10:13.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.635 "dma_device_type": 2 00:10:13.635 }, 00:10:13.635 { 00:10:13.635 "dma_device_id": "system", 00:10:13.635 "dma_device_type": 1 00:10:13.635 }, 00:10:13.635 { 00:10:13.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:13.636 "dma_device_type": 2 00:10:13.636 }, 00:10:13.636 { 00:10:13.636 "dma_device_id": "system", 00:10:13.636 "dma_device_type": 1 00:10:13.636 }, 00:10:13.636 { 00:10:13.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.636 "dma_device_type": 2 00:10:13.636 } 00:10:13.636 ], 00:10:13.636 "driver_specific": { 00:10:13.636 "raid": { 00:10:13.636 "uuid": "f3db07f9-e5d2-4555-969d-9be8dfcd84a9", 00:10:13.636 "strip_size_kb": 64, 00:10:13.636 "state": "online", 00:10:13.636 "raid_level": "raid0", 00:10:13.636 "superblock": false, 00:10:13.636 "num_base_bdevs": 4, 00:10:13.636 "num_base_bdevs_discovered": 4, 00:10:13.636 "num_base_bdevs_operational": 4, 00:10:13.636 "base_bdevs_list": [ 00:10:13.636 { 00:10:13.636 "name": "BaseBdev1", 00:10:13.636 "uuid": "0b14e67f-dc52-4420-8026-a5773050536b", 00:10:13.636 "is_configured": true, 00:10:13.636 "data_offset": 0, 00:10:13.636 "data_size": 65536 00:10:13.636 }, 00:10:13.636 { 00:10:13.636 "name": "BaseBdev2", 00:10:13.636 "uuid": "8fe78b8f-9a97-4f7f-bf70-0614aa6afbc2", 00:10:13.636 "is_configured": true, 00:10:13.636 "data_offset": 0, 00:10:13.636 "data_size": 65536 00:10:13.636 }, 00:10:13.636 { 00:10:13.636 "name": "BaseBdev3", 00:10:13.636 "uuid": "70d0d134-2711-41c8-bcfd-91c716326887", 00:10:13.636 "is_configured": true, 00:10:13.636 "data_offset": 0, 00:10:13.636 "data_size": 65536 00:10:13.636 }, 00:10:13.636 { 00:10:13.636 "name": "BaseBdev4", 00:10:13.636 "uuid": "71e2bf70-54f8-4282-a413-9b831ba18aba", 00:10:13.636 "is_configured": true, 00:10:13.636 "data_offset": 0, 00:10:13.636 "data_size": 65536 00:10:13.636 } 00:10:13.636 ] 00:10:13.636 } 00:10:13.636 } 00:10:13.636 }' 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:13.636 BaseBdev2 00:10:13.636 BaseBdev3 
00:10:13.636 BaseBdev4' 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.636 08:47:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.636 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.896 08:47:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.896 [2024-09-28 08:47:51.644949] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:13.896 [2024-09-28 08:47:51.644982] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.896 [2024-09-28 08:47:51.645040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.896 "name": "Existed_Raid", 00:10:13.896 "uuid": "f3db07f9-e5d2-4555-969d-9be8dfcd84a9", 00:10:13.896 "strip_size_kb": 64, 00:10:13.896 "state": "offline", 00:10:13.896 "raid_level": "raid0", 00:10:13.896 "superblock": false, 00:10:13.896 "num_base_bdevs": 4, 00:10:13.896 "num_base_bdevs_discovered": 3, 00:10:13.896 "num_base_bdevs_operational": 3, 00:10:13.896 "base_bdevs_list": [ 00:10:13.896 { 00:10:13.896 "name": null, 00:10:13.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.896 "is_configured": false, 00:10:13.896 "data_offset": 0, 00:10:13.896 "data_size": 65536 00:10:13.896 }, 00:10:13.896 { 00:10:13.896 "name": "BaseBdev2", 00:10:13.896 "uuid": "8fe78b8f-9a97-4f7f-bf70-0614aa6afbc2", 00:10:13.896 "is_configured": 
true, 00:10:13.896 "data_offset": 0, 00:10:13.896 "data_size": 65536 00:10:13.896 }, 00:10:13.896 { 00:10:13.896 "name": "BaseBdev3", 00:10:13.896 "uuid": "70d0d134-2711-41c8-bcfd-91c716326887", 00:10:13.896 "is_configured": true, 00:10:13.896 "data_offset": 0, 00:10:13.896 "data_size": 65536 00:10:13.896 }, 00:10:13.896 { 00:10:13.896 "name": "BaseBdev4", 00:10:13.896 "uuid": "71e2bf70-54f8-4282-a413-9b831ba18aba", 00:10:13.896 "is_configured": true, 00:10:13.896 "data_offset": 0, 00:10:13.896 "data_size": 65536 00:10:13.896 } 00:10:13.896 ] 00:10:13.896 }' 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.896 08:47:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.465 [2024-09-28 08:47:52.236246] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.465 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.465 [2024-09-28 08:47:52.396848] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.725 08:47:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.725 [2024-09-28 08:47:52.555851] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:14.725 [2024-09-28 08:47:52.555910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.725 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.985 BaseBdev2 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.985 [ 00:10:14.985 { 00:10:14.985 "name": "BaseBdev2", 00:10:14.985 "aliases": [ 00:10:14.985 "6c942f7b-8f1d-44b8-9678-bd304c86f9fe" 00:10:14.985 ], 00:10:14.985 "product_name": "Malloc disk", 00:10:14.985 "block_size": 512, 00:10:14.985 "num_blocks": 65536, 00:10:14.985 "uuid": "6c942f7b-8f1d-44b8-9678-bd304c86f9fe", 00:10:14.985 "assigned_rate_limits": { 00:10:14.985 "rw_ios_per_sec": 0, 00:10:14.985 "rw_mbytes_per_sec": 0, 00:10:14.985 "r_mbytes_per_sec": 0, 00:10:14.985 "w_mbytes_per_sec": 0 00:10:14.985 }, 00:10:14.985 "claimed": false, 00:10:14.985 "zoned": false, 00:10:14.985 "supported_io_types": { 00:10:14.985 "read": true, 00:10:14.985 "write": true, 00:10:14.985 "unmap": true, 00:10:14.985 "flush": true, 00:10:14.985 "reset": true, 00:10:14.985 "nvme_admin": false, 00:10:14.985 "nvme_io": false, 00:10:14.985 "nvme_io_md": false, 00:10:14.985 "write_zeroes": true, 00:10:14.985 "zcopy": true, 00:10:14.985 "get_zone_info": false, 00:10:14.985 "zone_management": false, 00:10:14.985 "zone_append": false, 00:10:14.985 "compare": false, 00:10:14.985 "compare_and_write": false, 00:10:14.985 "abort": true, 00:10:14.985 "seek_hole": false, 00:10:14.985 
"seek_data": false, 00:10:14.985 "copy": true, 00:10:14.985 "nvme_iov_md": false 00:10:14.985 }, 00:10:14.985 "memory_domains": [ 00:10:14.985 { 00:10:14.985 "dma_device_id": "system", 00:10:14.985 "dma_device_type": 1 00:10:14.985 }, 00:10:14.985 { 00:10:14.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.985 "dma_device_type": 2 00:10:14.985 } 00:10:14.985 ], 00:10:14.985 "driver_specific": {} 00:10:14.985 } 00:10:14.985 ] 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.985 BaseBdev3 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.985 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.985 [ 00:10:14.985 { 00:10:14.985 "name": "BaseBdev3", 00:10:14.985 "aliases": [ 00:10:14.985 "ee1ce2fd-4805-4833-9464-05eb9a8393f4" 00:10:14.985 ], 00:10:14.985 "product_name": "Malloc disk", 00:10:14.985 "block_size": 512, 00:10:14.985 "num_blocks": 65536, 00:10:14.985 "uuid": "ee1ce2fd-4805-4833-9464-05eb9a8393f4", 00:10:14.985 "assigned_rate_limits": { 00:10:14.985 "rw_ios_per_sec": 0, 00:10:14.985 "rw_mbytes_per_sec": 0, 00:10:14.985 "r_mbytes_per_sec": 0, 00:10:14.985 "w_mbytes_per_sec": 0 00:10:14.985 }, 00:10:14.985 "claimed": false, 00:10:14.985 "zoned": false, 00:10:14.985 "supported_io_types": { 00:10:14.985 "read": true, 00:10:14.985 "write": true, 00:10:14.985 "unmap": true, 00:10:14.985 "flush": true, 00:10:14.985 "reset": true, 00:10:14.985 "nvme_admin": false, 00:10:14.985 "nvme_io": false, 00:10:14.985 "nvme_io_md": false, 00:10:14.985 "write_zeroes": true, 00:10:14.985 "zcopy": true, 00:10:14.985 "get_zone_info": false, 00:10:14.985 "zone_management": false, 00:10:14.985 "zone_append": false, 00:10:14.985 "compare": false, 00:10:14.985 "compare_and_write": false, 00:10:14.985 "abort": true, 00:10:14.985 "seek_hole": false, 00:10:14.985 "seek_data": false, 
00:10:14.985 "copy": true, 00:10:14.985 "nvme_iov_md": false 00:10:14.985 }, 00:10:14.985 "memory_domains": [ 00:10:14.985 { 00:10:14.985 "dma_device_id": "system", 00:10:14.986 "dma_device_type": 1 00:10:14.986 }, 00:10:14.986 { 00:10:14.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.986 "dma_device_type": 2 00:10:14.986 } 00:10:14.986 ], 00:10:14.986 "driver_specific": {} 00:10:14.986 } 00:10:14.986 ] 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.986 BaseBdev4 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:14.986 
08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.986 [ 00:10:14.986 { 00:10:14.986 "name": "BaseBdev4", 00:10:14.986 "aliases": [ 00:10:14.986 "6f384169-a3d1-426c-847e-d94ced4b7511" 00:10:14.986 ], 00:10:14.986 "product_name": "Malloc disk", 00:10:14.986 "block_size": 512, 00:10:14.986 "num_blocks": 65536, 00:10:14.986 "uuid": "6f384169-a3d1-426c-847e-d94ced4b7511", 00:10:14.986 "assigned_rate_limits": { 00:10:14.986 "rw_ios_per_sec": 0, 00:10:14.986 "rw_mbytes_per_sec": 0, 00:10:14.986 "r_mbytes_per_sec": 0, 00:10:14.986 "w_mbytes_per_sec": 0 00:10:14.986 }, 00:10:14.986 "claimed": false, 00:10:14.986 "zoned": false, 00:10:14.986 "supported_io_types": { 00:10:14.986 "read": true, 00:10:14.986 "write": true, 00:10:14.986 "unmap": true, 00:10:14.986 "flush": true, 00:10:14.986 "reset": true, 00:10:14.986 "nvme_admin": false, 00:10:14.986 "nvme_io": false, 00:10:14.986 "nvme_io_md": false, 00:10:14.986 "write_zeroes": true, 00:10:14.986 "zcopy": true, 00:10:14.986 "get_zone_info": false, 00:10:14.986 "zone_management": false, 00:10:14.986 "zone_append": false, 00:10:14.986 "compare": false, 00:10:14.986 "compare_and_write": false, 00:10:14.986 "abort": true, 00:10:14.986 "seek_hole": false, 00:10:14.986 "seek_data": false, 00:10:14.986 
"copy": true, 00:10:14.986 "nvme_iov_md": false 00:10:14.986 }, 00:10:14.986 "memory_domains": [ 00:10:14.986 { 00:10:14.986 "dma_device_id": "system", 00:10:14.986 "dma_device_type": 1 00:10:14.986 }, 00:10:14.986 { 00:10:14.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.986 "dma_device_type": 2 00:10:14.986 } 00:10:14.986 ], 00:10:14.986 "driver_specific": {} 00:10:14.986 } 00:10:14.986 ] 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.986 [2024-09-28 08:47:52.964104] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.986 [2024-09-28 08:47:52.964193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.986 [2024-09-28 08:47:52.964249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.986 [2024-09-28 08:47:52.966324] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.986 [2024-09-28 08:47:52.966417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.986 08:47:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.986 08:47:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.245 08:47:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.245 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.246 "name": "Existed_Raid", 00:10:15.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.246 "strip_size_kb": 64, 00:10:15.246 "state": "configuring", 00:10:15.246 
"raid_level": "raid0", 00:10:15.246 "superblock": false, 00:10:15.246 "num_base_bdevs": 4, 00:10:15.246 "num_base_bdevs_discovered": 3, 00:10:15.246 "num_base_bdevs_operational": 4, 00:10:15.246 "base_bdevs_list": [ 00:10:15.246 { 00:10:15.246 "name": "BaseBdev1", 00:10:15.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.246 "is_configured": false, 00:10:15.246 "data_offset": 0, 00:10:15.246 "data_size": 0 00:10:15.246 }, 00:10:15.246 { 00:10:15.246 "name": "BaseBdev2", 00:10:15.246 "uuid": "6c942f7b-8f1d-44b8-9678-bd304c86f9fe", 00:10:15.246 "is_configured": true, 00:10:15.246 "data_offset": 0, 00:10:15.246 "data_size": 65536 00:10:15.246 }, 00:10:15.246 { 00:10:15.246 "name": "BaseBdev3", 00:10:15.246 "uuid": "ee1ce2fd-4805-4833-9464-05eb9a8393f4", 00:10:15.246 "is_configured": true, 00:10:15.246 "data_offset": 0, 00:10:15.246 "data_size": 65536 00:10:15.246 }, 00:10:15.246 { 00:10:15.246 "name": "BaseBdev4", 00:10:15.246 "uuid": "6f384169-a3d1-426c-847e-d94ced4b7511", 00:10:15.246 "is_configured": true, 00:10:15.246 "data_offset": 0, 00:10:15.246 "data_size": 65536 00:10:15.246 } 00:10:15.246 ] 00:10:15.246 }' 00:10:15.246 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.246 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.505 [2024-09-28 08:47:53.395343] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.505 "name": "Existed_Raid", 00:10:15.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.505 "strip_size_kb": 64, 00:10:15.505 "state": "configuring", 00:10:15.505 "raid_level": "raid0", 00:10:15.505 "superblock": false, 00:10:15.505 
"num_base_bdevs": 4, 00:10:15.505 "num_base_bdevs_discovered": 2, 00:10:15.505 "num_base_bdevs_operational": 4, 00:10:15.505 "base_bdevs_list": [ 00:10:15.505 { 00:10:15.505 "name": "BaseBdev1", 00:10:15.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.505 "is_configured": false, 00:10:15.505 "data_offset": 0, 00:10:15.505 "data_size": 0 00:10:15.505 }, 00:10:15.505 { 00:10:15.505 "name": null, 00:10:15.505 "uuid": "6c942f7b-8f1d-44b8-9678-bd304c86f9fe", 00:10:15.505 "is_configured": false, 00:10:15.505 "data_offset": 0, 00:10:15.505 "data_size": 65536 00:10:15.505 }, 00:10:15.505 { 00:10:15.505 "name": "BaseBdev3", 00:10:15.505 "uuid": "ee1ce2fd-4805-4833-9464-05eb9a8393f4", 00:10:15.505 "is_configured": true, 00:10:15.505 "data_offset": 0, 00:10:15.505 "data_size": 65536 00:10:15.505 }, 00:10:15.505 { 00:10:15.505 "name": "BaseBdev4", 00:10:15.505 "uuid": "6f384169-a3d1-426c-847e-d94ced4b7511", 00:10:15.505 "is_configured": true, 00:10:15.505 "data_offset": 0, 00:10:15.505 "data_size": 65536 00:10:15.505 } 00:10:15.505 ] 00:10:15.505 }' 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.505 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:16.074 08:47:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.074 [2024-09-28 08:47:53.892074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.074 BaseBdev1 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.074 [ 00:10:16.074 { 00:10:16.074 "name": "BaseBdev1", 00:10:16.074 "aliases": [ 00:10:16.074 "b704046a-29df-4923-9fc3-a777fea54935" 00:10:16.074 ], 00:10:16.074 "product_name": "Malloc disk", 00:10:16.074 "block_size": 512, 00:10:16.074 "num_blocks": 65536, 00:10:16.074 "uuid": "b704046a-29df-4923-9fc3-a777fea54935", 00:10:16.074 "assigned_rate_limits": { 00:10:16.074 "rw_ios_per_sec": 0, 00:10:16.074 "rw_mbytes_per_sec": 0, 00:10:16.074 "r_mbytes_per_sec": 0, 00:10:16.074 "w_mbytes_per_sec": 0 00:10:16.074 }, 00:10:16.074 "claimed": true, 00:10:16.074 "claim_type": "exclusive_write", 00:10:16.074 "zoned": false, 00:10:16.074 "supported_io_types": { 00:10:16.074 "read": true, 00:10:16.074 "write": true, 00:10:16.074 "unmap": true, 00:10:16.074 "flush": true, 00:10:16.074 "reset": true, 00:10:16.074 "nvme_admin": false, 00:10:16.074 "nvme_io": false, 00:10:16.074 "nvme_io_md": false, 00:10:16.074 "write_zeroes": true, 00:10:16.074 "zcopy": true, 00:10:16.074 "get_zone_info": false, 00:10:16.074 "zone_management": false, 00:10:16.074 "zone_append": false, 00:10:16.074 "compare": false, 00:10:16.074 "compare_and_write": false, 00:10:16.074 "abort": true, 00:10:16.074 "seek_hole": false, 00:10:16.074 "seek_data": false, 00:10:16.074 "copy": true, 00:10:16.074 "nvme_iov_md": false 00:10:16.074 }, 00:10:16.074 "memory_domains": [ 00:10:16.074 { 00:10:16.074 "dma_device_id": "system", 00:10:16.074 "dma_device_type": 1 00:10:16.074 }, 00:10:16.074 { 00:10:16.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.074 "dma_device_type": 2 00:10:16.074 } 00:10:16.074 ], 00:10:16.074 "driver_specific": {} 00:10:16.074 } 00:10:16.074 ] 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.074 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.075 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.075 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.075 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.075 "name": "Existed_Raid", 00:10:16.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.075 "strip_size_kb": 64, 00:10:16.075 "state": "configuring", 00:10:16.075 "raid_level": "raid0", 00:10:16.075 "superblock": false, 
00:10:16.075 "num_base_bdevs": 4, 00:10:16.075 "num_base_bdevs_discovered": 3, 00:10:16.075 "num_base_bdevs_operational": 4, 00:10:16.075 "base_bdevs_list": [ 00:10:16.075 { 00:10:16.075 "name": "BaseBdev1", 00:10:16.075 "uuid": "b704046a-29df-4923-9fc3-a777fea54935", 00:10:16.075 "is_configured": true, 00:10:16.075 "data_offset": 0, 00:10:16.075 "data_size": 65536 00:10:16.075 }, 00:10:16.075 { 00:10:16.075 "name": null, 00:10:16.075 "uuid": "6c942f7b-8f1d-44b8-9678-bd304c86f9fe", 00:10:16.075 "is_configured": false, 00:10:16.075 "data_offset": 0, 00:10:16.075 "data_size": 65536 00:10:16.075 }, 00:10:16.075 { 00:10:16.075 "name": "BaseBdev3", 00:10:16.075 "uuid": "ee1ce2fd-4805-4833-9464-05eb9a8393f4", 00:10:16.075 "is_configured": true, 00:10:16.075 "data_offset": 0, 00:10:16.075 "data_size": 65536 00:10:16.075 }, 00:10:16.075 { 00:10:16.075 "name": "BaseBdev4", 00:10:16.075 "uuid": "6f384169-a3d1-426c-847e-d94ced4b7511", 00:10:16.075 "is_configured": true, 00:10:16.075 "data_offset": 0, 00:10:16.075 "data_size": 65536 00:10:16.075 } 00:10:16.075 ] 00:10:16.075 }' 00:10:16.075 08:47:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.075 08:47:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.335 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:16.335 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.335 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.335 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:16.594 08:47:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.594 [2024-09-28 08:47:54.343335] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.594 08:47:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.594 "name": "Existed_Raid", 00:10:16.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.594 "strip_size_kb": 64, 00:10:16.594 "state": "configuring", 00:10:16.594 "raid_level": "raid0", 00:10:16.594 "superblock": false, 00:10:16.594 "num_base_bdevs": 4, 00:10:16.594 "num_base_bdevs_discovered": 2, 00:10:16.594 "num_base_bdevs_operational": 4, 00:10:16.594 "base_bdevs_list": [ 00:10:16.594 { 00:10:16.594 "name": "BaseBdev1", 00:10:16.594 "uuid": "b704046a-29df-4923-9fc3-a777fea54935", 00:10:16.594 "is_configured": true, 00:10:16.594 "data_offset": 0, 00:10:16.594 "data_size": 65536 00:10:16.594 }, 00:10:16.594 { 00:10:16.594 "name": null, 00:10:16.594 "uuid": "6c942f7b-8f1d-44b8-9678-bd304c86f9fe", 00:10:16.594 "is_configured": false, 00:10:16.594 "data_offset": 0, 00:10:16.594 "data_size": 65536 00:10:16.594 }, 00:10:16.594 { 00:10:16.594 "name": null, 00:10:16.594 "uuid": "ee1ce2fd-4805-4833-9464-05eb9a8393f4", 00:10:16.594 "is_configured": false, 00:10:16.594 "data_offset": 0, 00:10:16.594 "data_size": 65536 00:10:16.594 }, 00:10:16.594 { 00:10:16.594 "name": "BaseBdev4", 00:10:16.594 "uuid": "6f384169-a3d1-426c-847e-d94ced4b7511", 00:10:16.594 "is_configured": true, 00:10:16.594 "data_offset": 0, 00:10:16.594 "data_size": 65536 00:10:16.594 } 00:10:16.594 ] 00:10:16.594 }' 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.594 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.853 08:47:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.854 [2024-09-28 08:47:54.802600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.854 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.113 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.113 "name": "Existed_Raid", 00:10:17.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.113 "strip_size_kb": 64, 00:10:17.113 "state": "configuring", 00:10:17.113 "raid_level": "raid0", 00:10:17.113 "superblock": false, 00:10:17.113 "num_base_bdevs": 4, 00:10:17.113 "num_base_bdevs_discovered": 3, 00:10:17.113 "num_base_bdevs_operational": 4, 00:10:17.113 "base_bdevs_list": [ 00:10:17.113 { 00:10:17.113 "name": "BaseBdev1", 00:10:17.113 "uuid": "b704046a-29df-4923-9fc3-a777fea54935", 00:10:17.113 "is_configured": true, 00:10:17.113 "data_offset": 0, 00:10:17.113 "data_size": 65536 00:10:17.113 }, 00:10:17.113 { 00:10:17.113 "name": null, 00:10:17.113 "uuid": "6c942f7b-8f1d-44b8-9678-bd304c86f9fe", 00:10:17.113 "is_configured": false, 00:10:17.113 "data_offset": 0, 00:10:17.113 "data_size": 65536 00:10:17.113 }, 00:10:17.113 { 00:10:17.113 "name": "BaseBdev3", 00:10:17.113 "uuid": "ee1ce2fd-4805-4833-9464-05eb9a8393f4", 
00:10:17.113 "is_configured": true, 00:10:17.113 "data_offset": 0, 00:10:17.113 "data_size": 65536 00:10:17.113 }, 00:10:17.113 { 00:10:17.113 "name": "BaseBdev4", 00:10:17.113 "uuid": "6f384169-a3d1-426c-847e-d94ced4b7511", 00:10:17.113 "is_configured": true, 00:10:17.113 "data_offset": 0, 00:10:17.113 "data_size": 65536 00:10:17.113 } 00:10:17.113 ] 00:10:17.113 }' 00:10:17.113 08:47:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.113 08:47:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.373 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.373 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.373 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:17.373 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.373 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.373 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:17.373 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:17.373 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.373 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.373 [2024-09-28 08:47:55.273779] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.632 08:47:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.632 "name": "Existed_Raid", 00:10:17.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.632 "strip_size_kb": 64, 00:10:17.632 "state": "configuring", 00:10:17.632 "raid_level": "raid0", 00:10:17.632 "superblock": false, 00:10:17.632 "num_base_bdevs": 4, 00:10:17.632 "num_base_bdevs_discovered": 2, 00:10:17.632 
"num_base_bdevs_operational": 4, 00:10:17.632 "base_bdevs_list": [ 00:10:17.632 { 00:10:17.632 "name": null, 00:10:17.632 "uuid": "b704046a-29df-4923-9fc3-a777fea54935", 00:10:17.632 "is_configured": false, 00:10:17.632 "data_offset": 0, 00:10:17.632 "data_size": 65536 00:10:17.632 }, 00:10:17.632 { 00:10:17.632 "name": null, 00:10:17.632 "uuid": "6c942f7b-8f1d-44b8-9678-bd304c86f9fe", 00:10:17.632 "is_configured": false, 00:10:17.632 "data_offset": 0, 00:10:17.632 "data_size": 65536 00:10:17.632 }, 00:10:17.632 { 00:10:17.632 "name": "BaseBdev3", 00:10:17.632 "uuid": "ee1ce2fd-4805-4833-9464-05eb9a8393f4", 00:10:17.632 "is_configured": true, 00:10:17.632 "data_offset": 0, 00:10:17.632 "data_size": 65536 00:10:17.632 }, 00:10:17.632 { 00:10:17.632 "name": "BaseBdev4", 00:10:17.632 "uuid": "6f384169-a3d1-426c-847e-d94ced4b7511", 00:10:17.632 "is_configured": true, 00:10:17.632 "data_offset": 0, 00:10:17.632 "data_size": 65536 00:10:17.632 } 00:10:17.632 ] 00:10:17.632 }' 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.632 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.892 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.892 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:17.892 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.892 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.892 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.892 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:17.892 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:17.892 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.892 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.892 [2024-09-28 08:47:55.884312] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.152 
08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.152 "name": "Existed_Raid", 00:10:18.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.152 "strip_size_kb": 64, 00:10:18.152 "state": "configuring", 00:10:18.152 "raid_level": "raid0", 00:10:18.152 "superblock": false, 00:10:18.152 "num_base_bdevs": 4, 00:10:18.152 "num_base_bdevs_discovered": 3, 00:10:18.152 "num_base_bdevs_operational": 4, 00:10:18.152 "base_bdevs_list": [ 00:10:18.152 { 00:10:18.152 "name": null, 00:10:18.152 "uuid": "b704046a-29df-4923-9fc3-a777fea54935", 00:10:18.152 "is_configured": false, 00:10:18.152 "data_offset": 0, 00:10:18.152 "data_size": 65536 00:10:18.152 }, 00:10:18.152 { 00:10:18.152 "name": "BaseBdev2", 00:10:18.152 "uuid": "6c942f7b-8f1d-44b8-9678-bd304c86f9fe", 00:10:18.152 "is_configured": true, 00:10:18.152 "data_offset": 0, 00:10:18.152 "data_size": 65536 00:10:18.152 }, 00:10:18.152 { 00:10:18.152 "name": "BaseBdev3", 00:10:18.152 "uuid": "ee1ce2fd-4805-4833-9464-05eb9a8393f4", 00:10:18.152 "is_configured": true, 00:10:18.152 "data_offset": 0, 00:10:18.152 "data_size": 65536 00:10:18.152 }, 00:10:18.152 { 00:10:18.152 "name": "BaseBdev4", 00:10:18.152 "uuid": "6f384169-a3d1-426c-847e-d94ced4b7511", 00:10:18.152 "is_configured": true, 00:10:18.152 "data_offset": 0, 00:10:18.152 "data_size": 65536 00:10:18.152 } 00:10:18.152 ] 00:10:18.152 }' 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.152 08:47:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.411 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:18.411 08:47:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.411 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.411 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.411 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.411 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:18.411 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.411 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:18.411 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.411 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.411 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.411 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b704046a-29df-4923-9fc3-a777fea54935 00:10:18.411 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.411 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.671 [2024-09-28 08:47:56.437493] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:18.671 [2024-09-28 08:47:56.437546] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:18.671 [2024-09-28 08:47:56.437554] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:18.671 [2024-09-28 08:47:56.437864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:18.671 
[2024-09-28 08:47:56.438043] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:18.671 [2024-09-28 08:47:56.438061] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:18.671 [2024-09-28 08:47:56.438342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.671 NewBaseBdev 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:18.671 [ 00:10:18.671 { 00:10:18.671 "name": "NewBaseBdev", 00:10:18.671 "aliases": [ 00:10:18.671 "b704046a-29df-4923-9fc3-a777fea54935" 00:10:18.671 ], 00:10:18.671 "product_name": "Malloc disk", 00:10:18.671 "block_size": 512, 00:10:18.671 "num_blocks": 65536, 00:10:18.671 "uuid": "b704046a-29df-4923-9fc3-a777fea54935", 00:10:18.671 "assigned_rate_limits": { 00:10:18.671 "rw_ios_per_sec": 0, 00:10:18.671 "rw_mbytes_per_sec": 0, 00:10:18.671 "r_mbytes_per_sec": 0, 00:10:18.671 "w_mbytes_per_sec": 0 00:10:18.671 }, 00:10:18.671 "claimed": true, 00:10:18.671 "claim_type": "exclusive_write", 00:10:18.671 "zoned": false, 00:10:18.671 "supported_io_types": { 00:10:18.671 "read": true, 00:10:18.671 "write": true, 00:10:18.671 "unmap": true, 00:10:18.671 "flush": true, 00:10:18.671 "reset": true, 00:10:18.671 "nvme_admin": false, 00:10:18.671 "nvme_io": false, 00:10:18.671 "nvme_io_md": false, 00:10:18.671 "write_zeroes": true, 00:10:18.671 "zcopy": true, 00:10:18.671 "get_zone_info": false, 00:10:18.671 "zone_management": false, 00:10:18.671 "zone_append": false, 00:10:18.671 "compare": false, 00:10:18.671 "compare_and_write": false, 00:10:18.671 "abort": true, 00:10:18.671 "seek_hole": false, 00:10:18.671 "seek_data": false, 00:10:18.671 "copy": true, 00:10:18.671 "nvme_iov_md": false 00:10:18.671 }, 00:10:18.671 "memory_domains": [ 00:10:18.671 { 00:10:18.671 "dma_device_id": "system", 00:10:18.671 "dma_device_type": 1 00:10:18.671 }, 00:10:18.671 { 00:10:18.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.671 "dma_device_type": 2 00:10:18.671 } 00:10:18.671 ], 00:10:18.671 "driver_specific": {} 00:10:18.671 } 00:10:18.671 ] 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.671 "name": "Existed_Raid", 00:10:18.671 "uuid": "ee7294c5-631d-46e1-bf5f-2ba43a9cd303", 00:10:18.671 "strip_size_kb": 64, 00:10:18.671 "state": "online", 00:10:18.671 "raid_level": "raid0", 00:10:18.671 "superblock": false, 00:10:18.671 "num_base_bdevs": 4, 00:10:18.671 
"num_base_bdevs_discovered": 4, 00:10:18.671 "num_base_bdevs_operational": 4, 00:10:18.671 "base_bdevs_list": [ 00:10:18.671 { 00:10:18.671 "name": "NewBaseBdev", 00:10:18.671 "uuid": "b704046a-29df-4923-9fc3-a777fea54935", 00:10:18.671 "is_configured": true, 00:10:18.671 "data_offset": 0, 00:10:18.671 "data_size": 65536 00:10:18.671 }, 00:10:18.671 { 00:10:18.671 "name": "BaseBdev2", 00:10:18.671 "uuid": "6c942f7b-8f1d-44b8-9678-bd304c86f9fe", 00:10:18.671 "is_configured": true, 00:10:18.671 "data_offset": 0, 00:10:18.671 "data_size": 65536 00:10:18.671 }, 00:10:18.671 { 00:10:18.671 "name": "BaseBdev3", 00:10:18.671 "uuid": "ee1ce2fd-4805-4833-9464-05eb9a8393f4", 00:10:18.671 "is_configured": true, 00:10:18.671 "data_offset": 0, 00:10:18.671 "data_size": 65536 00:10:18.671 }, 00:10:18.671 { 00:10:18.671 "name": "BaseBdev4", 00:10:18.671 "uuid": "6f384169-a3d1-426c-847e-d94ced4b7511", 00:10:18.671 "is_configured": true, 00:10:18.671 "data_offset": 0, 00:10:18.671 "data_size": 65536 00:10:18.671 } 00:10:18.671 ] 00:10:18.671 }' 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.671 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.930 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:18.930 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:18.930 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:18.930 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:18.930 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:18.930 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:18.930 08:47:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:18.930 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:18.930 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.930 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.930 [2024-09-28 08:47:56.917153] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.190 08:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.190 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.190 "name": "Existed_Raid", 00:10:19.190 "aliases": [ 00:10:19.190 "ee7294c5-631d-46e1-bf5f-2ba43a9cd303" 00:10:19.190 ], 00:10:19.190 "product_name": "Raid Volume", 00:10:19.190 "block_size": 512, 00:10:19.190 "num_blocks": 262144, 00:10:19.190 "uuid": "ee7294c5-631d-46e1-bf5f-2ba43a9cd303", 00:10:19.190 "assigned_rate_limits": { 00:10:19.190 "rw_ios_per_sec": 0, 00:10:19.190 "rw_mbytes_per_sec": 0, 00:10:19.190 "r_mbytes_per_sec": 0, 00:10:19.190 "w_mbytes_per_sec": 0 00:10:19.190 }, 00:10:19.190 "claimed": false, 00:10:19.190 "zoned": false, 00:10:19.190 "supported_io_types": { 00:10:19.190 "read": true, 00:10:19.190 "write": true, 00:10:19.190 "unmap": true, 00:10:19.190 "flush": true, 00:10:19.190 "reset": true, 00:10:19.190 "nvme_admin": false, 00:10:19.190 "nvme_io": false, 00:10:19.190 "nvme_io_md": false, 00:10:19.190 "write_zeroes": true, 00:10:19.190 "zcopy": false, 00:10:19.190 "get_zone_info": false, 00:10:19.190 "zone_management": false, 00:10:19.190 "zone_append": false, 00:10:19.190 "compare": false, 00:10:19.190 "compare_and_write": false, 00:10:19.190 "abort": false, 00:10:19.190 "seek_hole": false, 00:10:19.190 "seek_data": false, 00:10:19.190 "copy": false, 00:10:19.190 "nvme_iov_md": false 00:10:19.190 }, 00:10:19.190 "memory_domains": [ 
00:10:19.190 { 00:10:19.190 "dma_device_id": "system", 00:10:19.190 "dma_device_type": 1 00:10:19.190 }, 00:10:19.190 { 00:10:19.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.190 "dma_device_type": 2 00:10:19.190 }, 00:10:19.190 { 00:10:19.190 "dma_device_id": "system", 00:10:19.190 "dma_device_type": 1 00:10:19.190 }, 00:10:19.190 { 00:10:19.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.190 "dma_device_type": 2 00:10:19.190 }, 00:10:19.190 { 00:10:19.190 "dma_device_id": "system", 00:10:19.190 "dma_device_type": 1 00:10:19.190 }, 00:10:19.190 { 00:10:19.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.190 "dma_device_type": 2 00:10:19.190 }, 00:10:19.190 { 00:10:19.190 "dma_device_id": "system", 00:10:19.190 "dma_device_type": 1 00:10:19.190 }, 00:10:19.190 { 00:10:19.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.190 "dma_device_type": 2 00:10:19.190 } 00:10:19.190 ], 00:10:19.190 "driver_specific": { 00:10:19.190 "raid": { 00:10:19.190 "uuid": "ee7294c5-631d-46e1-bf5f-2ba43a9cd303", 00:10:19.190 "strip_size_kb": 64, 00:10:19.190 "state": "online", 00:10:19.190 "raid_level": "raid0", 00:10:19.190 "superblock": false, 00:10:19.190 "num_base_bdevs": 4, 00:10:19.190 "num_base_bdevs_discovered": 4, 00:10:19.190 "num_base_bdevs_operational": 4, 00:10:19.190 "base_bdevs_list": [ 00:10:19.190 { 00:10:19.190 "name": "NewBaseBdev", 00:10:19.190 "uuid": "b704046a-29df-4923-9fc3-a777fea54935", 00:10:19.190 "is_configured": true, 00:10:19.190 "data_offset": 0, 00:10:19.190 "data_size": 65536 00:10:19.190 }, 00:10:19.190 { 00:10:19.190 "name": "BaseBdev2", 00:10:19.190 "uuid": "6c942f7b-8f1d-44b8-9678-bd304c86f9fe", 00:10:19.190 "is_configured": true, 00:10:19.190 "data_offset": 0, 00:10:19.190 "data_size": 65536 00:10:19.190 }, 00:10:19.190 { 00:10:19.190 "name": "BaseBdev3", 00:10:19.190 "uuid": "ee1ce2fd-4805-4833-9464-05eb9a8393f4", 00:10:19.190 "is_configured": true, 00:10:19.190 "data_offset": 0, 00:10:19.190 "data_size": 65536 
00:10:19.190 }, 00:10:19.190 { 00:10:19.190 "name": "BaseBdev4", 00:10:19.190 "uuid": "6f384169-a3d1-426c-847e-d94ced4b7511", 00:10:19.190 "is_configured": true, 00:10:19.190 "data_offset": 0, 00:10:19.190 "data_size": 65536 00:10:19.190 } 00:10:19.190 ] 00:10:19.190 } 00:10:19.190 } 00:10:19.190 }' 00:10:19.190 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:19.190 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:19.190 BaseBdev2 00:10:19.190 BaseBdev3 00:10:19.190 BaseBdev4' 00:10:19.190 08:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.190 
08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.190 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.191 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.450 [2024-09-28 08:47:57.224167] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:19.450 [2024-09-28 08:47:57.224200] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.450 [2024-09-28 08:47:57.224279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.450 [2024-09-28 08:47:57.224355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.450 [2024-09-28 08:47:57.224365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69389 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 69389 ']' 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 69389 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69389 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69389' 00:10:19.450 killing process with pid 69389 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 69389 00:10:19.450 [2024-09-28 08:47:57.273885] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.450 08:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 69389 00:10:19.710 [2024-09-28 08:47:57.684146] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.089 08:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:21.089 ************************************ 00:10:21.089 END TEST raid_state_function_test 00:10:21.089 ************************************ 00:10:21.089 00:10:21.089 real 0m11.697s 00:10:21.089 user 0m18.259s 00:10:21.089 sys 0m2.156s 00:10:21.089 08:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.089 08:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.089 08:47:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:21.089 08:47:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:21.089 08:47:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.089 08:47:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.089 ************************************ 00:10:21.089 START TEST raid_state_function_test_sb 00:10:21.089 ************************************ 00:10:21.089 08:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:10:21.089 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:21.089 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:21.089 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:21.089 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:21.349 
08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:21.349 Process raid pid: 70066 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70066 00:10:21.349 08:47:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70066' 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70066 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 70066 ']' 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.349 08:47:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.349 [2024-09-28 08:47:59.187954] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:21.350 [2024-09-28 08:47:59.188216] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.609 [2024-09-28 08:47:59.359290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.920 [2024-09-28 08:47:59.604205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.920 [2024-09-28 08:47:59.837942] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.920 [2024-09-28 08:47:59.838051] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.186 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.186 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:22.186 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:22.186 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.186 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.187 [2024-09-28 08:48:00.009866] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.187 [2024-09-28 08:48:00.009979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.187 [2024-09-28 08:48:00.010027] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.187 [2024-09-28 08:48:00.010053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.187 [2024-09-28 08:48:00.010073] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:22.187 [2024-09-28 08:48:00.010097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:22.187 [2024-09-28 08:48:00.010116] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:22.187 [2024-09-28 08:48:00.010157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.187 08:48:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.187 "name": "Existed_Raid", 00:10:22.187 "uuid": "66a95711-2dd6-4db4-a7fd-1277776d5ae3", 00:10:22.187 "strip_size_kb": 64, 00:10:22.187 "state": "configuring", 00:10:22.187 "raid_level": "raid0", 00:10:22.187 "superblock": true, 00:10:22.187 "num_base_bdevs": 4, 00:10:22.187 "num_base_bdevs_discovered": 0, 00:10:22.187 "num_base_bdevs_operational": 4, 00:10:22.187 "base_bdevs_list": [ 00:10:22.187 { 00:10:22.187 "name": "BaseBdev1", 00:10:22.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.187 "is_configured": false, 00:10:22.187 "data_offset": 0, 00:10:22.187 "data_size": 0 00:10:22.187 }, 00:10:22.187 { 00:10:22.187 "name": "BaseBdev2", 00:10:22.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.187 "is_configured": false, 00:10:22.187 "data_offset": 0, 00:10:22.187 "data_size": 0 00:10:22.187 }, 00:10:22.187 { 00:10:22.187 "name": "BaseBdev3", 00:10:22.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.187 "is_configured": false, 00:10:22.187 "data_offset": 0, 00:10:22.187 "data_size": 0 00:10:22.187 }, 00:10:22.187 { 00:10:22.187 "name": "BaseBdev4", 00:10:22.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.187 "is_configured": false, 00:10:22.187 "data_offset": 0, 00:10:22.187 "data_size": 0 00:10:22.187 } 00:10:22.187 ] 00:10:22.187 }' 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.187 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.447 08:48:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:22.447 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.447 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.447 [2024-09-28 08:48:00.425044] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:22.447 [2024-09-28 08:48:00.425087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:22.447 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.447 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:22.447 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.447 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.447 [2024-09-28 08:48:00.437056] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.447 [2024-09-28 08:48:00.437151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.447 [2024-09-28 08:48:00.437180] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.447 [2024-09-28 08:48:00.437205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.447 [2024-09-28 08:48:00.437224] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:22.447 [2024-09-28 08:48:00.437246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:22.447 [2024-09-28 08:48:00.437264] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:22.447 [2024-09-28 08:48:00.437307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.707 [2024-09-28 08:48:00.527001] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.707 BaseBdev1 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.707 [ 00:10:22.707 { 00:10:22.707 "name": "BaseBdev1", 00:10:22.707 "aliases": [ 00:10:22.707 "d261c611-c723-4503-8545-c2bb6e1446ae" 00:10:22.707 ], 00:10:22.707 "product_name": "Malloc disk", 00:10:22.707 "block_size": 512, 00:10:22.707 "num_blocks": 65536, 00:10:22.707 "uuid": "d261c611-c723-4503-8545-c2bb6e1446ae", 00:10:22.707 "assigned_rate_limits": { 00:10:22.707 "rw_ios_per_sec": 0, 00:10:22.707 "rw_mbytes_per_sec": 0, 00:10:22.707 "r_mbytes_per_sec": 0, 00:10:22.707 "w_mbytes_per_sec": 0 00:10:22.707 }, 00:10:22.707 "claimed": true, 00:10:22.707 "claim_type": "exclusive_write", 00:10:22.707 "zoned": false, 00:10:22.707 "supported_io_types": { 00:10:22.707 "read": true, 00:10:22.707 "write": true, 00:10:22.707 "unmap": true, 00:10:22.707 "flush": true, 00:10:22.707 "reset": true, 00:10:22.707 "nvme_admin": false, 00:10:22.707 "nvme_io": false, 00:10:22.707 "nvme_io_md": false, 00:10:22.707 "write_zeroes": true, 00:10:22.707 "zcopy": true, 00:10:22.707 "get_zone_info": false, 00:10:22.707 "zone_management": false, 00:10:22.707 "zone_append": false, 00:10:22.707 "compare": false, 00:10:22.707 "compare_and_write": false, 00:10:22.707 "abort": true, 00:10:22.707 "seek_hole": false, 00:10:22.707 "seek_data": false, 00:10:22.707 "copy": true, 00:10:22.707 "nvme_iov_md": false 00:10:22.707 }, 00:10:22.707 "memory_domains": [ 00:10:22.707 { 00:10:22.707 "dma_device_id": "system", 00:10:22.707 "dma_device_type": 1 00:10:22.707 }, 00:10:22.707 { 00:10:22.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.707 "dma_device_type": 2 00:10:22.707 } 00:10:22.707 ], 00:10:22.707 "driver_specific": {} 
00:10:22.707 } 00:10:22.707 ] 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.707 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.707 "name": "Existed_Raid", 00:10:22.707 "uuid": "efb88935-af89-444f-809d-54d08ea875e6", 00:10:22.707 "strip_size_kb": 64, 00:10:22.707 "state": "configuring", 00:10:22.707 "raid_level": "raid0", 00:10:22.707 "superblock": true, 00:10:22.707 "num_base_bdevs": 4, 00:10:22.707 "num_base_bdevs_discovered": 1, 00:10:22.707 "num_base_bdevs_operational": 4, 00:10:22.707 "base_bdevs_list": [ 00:10:22.707 { 00:10:22.707 "name": "BaseBdev1", 00:10:22.707 "uuid": "d261c611-c723-4503-8545-c2bb6e1446ae", 00:10:22.707 "is_configured": true, 00:10:22.707 "data_offset": 2048, 00:10:22.708 "data_size": 63488 00:10:22.708 }, 00:10:22.708 { 00:10:22.708 "name": "BaseBdev2", 00:10:22.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.708 "is_configured": false, 00:10:22.708 "data_offset": 0, 00:10:22.708 "data_size": 0 00:10:22.708 }, 00:10:22.708 { 00:10:22.708 "name": "BaseBdev3", 00:10:22.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.708 "is_configured": false, 00:10:22.708 "data_offset": 0, 00:10:22.708 "data_size": 0 00:10:22.708 }, 00:10:22.708 { 00:10:22.708 "name": "BaseBdev4", 00:10:22.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.708 "is_configured": false, 00:10:22.708 "data_offset": 0, 00:10:22.708 "data_size": 0 00:10:22.708 } 00:10:22.708 ] 00:10:22.708 }' 00:10:22.708 08:48:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.708 08:48:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:23.277 [2024-09-28 08:48:01.006206] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:23.277 [2024-09-28 08:48:01.006271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.277 [2024-09-28 08:48:01.014249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.277 [2024-09-28 08:48:01.016333] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.277 [2024-09-28 08:48:01.016378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.277 [2024-09-28 08:48:01.016388] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:23.277 [2024-09-28 08:48:01.016399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:23.277 [2024-09-28 08:48:01.016406] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:23.277 [2024-09-28 08:48:01.016414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:23.277 08:48:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.277 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.277 "name": 
"Existed_Raid", 00:10:23.277 "uuid": "afee8ec1-ecf0-478b-b6a3-b01439035a7a", 00:10:23.277 "strip_size_kb": 64, 00:10:23.277 "state": "configuring", 00:10:23.277 "raid_level": "raid0", 00:10:23.277 "superblock": true, 00:10:23.277 "num_base_bdevs": 4, 00:10:23.277 "num_base_bdevs_discovered": 1, 00:10:23.277 "num_base_bdevs_operational": 4, 00:10:23.277 "base_bdevs_list": [ 00:10:23.277 { 00:10:23.277 "name": "BaseBdev1", 00:10:23.278 "uuid": "d261c611-c723-4503-8545-c2bb6e1446ae", 00:10:23.278 "is_configured": true, 00:10:23.278 "data_offset": 2048, 00:10:23.278 "data_size": 63488 00:10:23.278 }, 00:10:23.278 { 00:10:23.278 "name": "BaseBdev2", 00:10:23.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.278 "is_configured": false, 00:10:23.278 "data_offset": 0, 00:10:23.278 "data_size": 0 00:10:23.278 }, 00:10:23.278 { 00:10:23.278 "name": "BaseBdev3", 00:10:23.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.278 "is_configured": false, 00:10:23.278 "data_offset": 0, 00:10:23.278 "data_size": 0 00:10:23.278 }, 00:10:23.278 { 00:10:23.278 "name": "BaseBdev4", 00:10:23.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.278 "is_configured": false, 00:10:23.278 "data_offset": 0, 00:10:23.278 "data_size": 0 00:10:23.278 } 00:10:23.278 ] 00:10:23.278 }' 00:10:23.278 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.278 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.538 [2024-09-28 08:48:01.459278] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:23.538 BaseBdev2 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.538 [ 00:10:23.538 { 00:10:23.538 "name": "BaseBdev2", 00:10:23.538 "aliases": [ 00:10:23.538 "7901b6d7-d1d2-4c1d-a258-3d698fc83bc9" 00:10:23.538 ], 00:10:23.538 "product_name": "Malloc disk", 00:10:23.538 "block_size": 512, 00:10:23.538 "num_blocks": 65536, 00:10:23.538 "uuid": "7901b6d7-d1d2-4c1d-a258-3d698fc83bc9", 00:10:23.538 
"assigned_rate_limits": { 00:10:23.538 "rw_ios_per_sec": 0, 00:10:23.538 "rw_mbytes_per_sec": 0, 00:10:23.538 "r_mbytes_per_sec": 0, 00:10:23.538 "w_mbytes_per_sec": 0 00:10:23.538 }, 00:10:23.538 "claimed": true, 00:10:23.538 "claim_type": "exclusive_write", 00:10:23.538 "zoned": false, 00:10:23.538 "supported_io_types": { 00:10:23.538 "read": true, 00:10:23.538 "write": true, 00:10:23.538 "unmap": true, 00:10:23.538 "flush": true, 00:10:23.538 "reset": true, 00:10:23.538 "nvme_admin": false, 00:10:23.538 "nvme_io": false, 00:10:23.538 "nvme_io_md": false, 00:10:23.538 "write_zeroes": true, 00:10:23.538 "zcopy": true, 00:10:23.538 "get_zone_info": false, 00:10:23.538 "zone_management": false, 00:10:23.538 "zone_append": false, 00:10:23.538 "compare": false, 00:10:23.538 "compare_and_write": false, 00:10:23.538 "abort": true, 00:10:23.538 "seek_hole": false, 00:10:23.538 "seek_data": false, 00:10:23.538 "copy": true, 00:10:23.538 "nvme_iov_md": false 00:10:23.538 }, 00:10:23.538 "memory_domains": [ 00:10:23.538 { 00:10:23.538 "dma_device_id": "system", 00:10:23.538 "dma_device_type": 1 00:10:23.538 }, 00:10:23.538 { 00:10:23.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.538 "dma_device_type": 2 00:10:23.538 } 00:10:23.538 ], 00:10:23.538 "driver_specific": {} 00:10:23.538 } 00:10:23.538 ] 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.538 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.797 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.797 "name": "Existed_Raid", 00:10:23.797 "uuid": "afee8ec1-ecf0-478b-b6a3-b01439035a7a", 00:10:23.797 "strip_size_kb": 64, 00:10:23.797 "state": "configuring", 00:10:23.797 "raid_level": "raid0", 00:10:23.797 "superblock": true, 00:10:23.797 "num_base_bdevs": 4, 00:10:23.797 "num_base_bdevs_discovered": 2, 00:10:23.797 "num_base_bdevs_operational": 4, 
00:10:23.797 "base_bdevs_list": [ 00:10:23.797 { 00:10:23.797 "name": "BaseBdev1", 00:10:23.797 "uuid": "d261c611-c723-4503-8545-c2bb6e1446ae", 00:10:23.797 "is_configured": true, 00:10:23.797 "data_offset": 2048, 00:10:23.797 "data_size": 63488 00:10:23.797 }, 00:10:23.797 { 00:10:23.797 "name": "BaseBdev2", 00:10:23.797 "uuid": "7901b6d7-d1d2-4c1d-a258-3d698fc83bc9", 00:10:23.797 "is_configured": true, 00:10:23.797 "data_offset": 2048, 00:10:23.797 "data_size": 63488 00:10:23.797 }, 00:10:23.797 { 00:10:23.797 "name": "BaseBdev3", 00:10:23.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.797 "is_configured": false, 00:10:23.797 "data_offset": 0, 00:10:23.797 "data_size": 0 00:10:23.797 }, 00:10:23.797 { 00:10:23.797 "name": "BaseBdev4", 00:10:23.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.797 "is_configured": false, 00:10:23.797 "data_offset": 0, 00:10:23.797 "data_size": 0 00:10:23.797 } 00:10:23.797 ] 00:10:23.797 }' 00:10:23.797 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.797 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.056 [2024-09-28 08:48:01.981335] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.056 BaseBdev3 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.056 08:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.056 [ 00:10:24.056 { 00:10:24.056 "name": "BaseBdev3", 00:10:24.056 "aliases": [ 00:10:24.056 "2a7be41c-9538-4b58-8f6e-011937178521" 00:10:24.056 ], 00:10:24.056 "product_name": "Malloc disk", 00:10:24.056 "block_size": 512, 00:10:24.056 "num_blocks": 65536, 00:10:24.056 "uuid": "2a7be41c-9538-4b58-8f6e-011937178521", 00:10:24.056 "assigned_rate_limits": { 00:10:24.056 "rw_ios_per_sec": 0, 00:10:24.056 "rw_mbytes_per_sec": 0, 00:10:24.056 "r_mbytes_per_sec": 0, 00:10:24.056 "w_mbytes_per_sec": 0 00:10:24.056 }, 00:10:24.056 "claimed": true, 00:10:24.056 "claim_type": "exclusive_write", 00:10:24.056 "zoned": false, 00:10:24.056 "supported_io_types": { 00:10:24.056 "read": true, 00:10:24.056 
"write": true, 00:10:24.056 "unmap": true, 00:10:24.056 "flush": true, 00:10:24.056 "reset": true, 00:10:24.056 "nvme_admin": false, 00:10:24.056 "nvme_io": false, 00:10:24.056 "nvme_io_md": false, 00:10:24.056 "write_zeroes": true, 00:10:24.056 "zcopy": true, 00:10:24.056 "get_zone_info": false, 00:10:24.056 "zone_management": false, 00:10:24.056 "zone_append": false, 00:10:24.056 "compare": false, 00:10:24.056 "compare_and_write": false, 00:10:24.056 "abort": true, 00:10:24.056 "seek_hole": false, 00:10:24.056 "seek_data": false, 00:10:24.056 "copy": true, 00:10:24.056 "nvme_iov_md": false 00:10:24.056 }, 00:10:24.056 "memory_domains": [ 00:10:24.056 { 00:10:24.056 "dma_device_id": "system", 00:10:24.056 "dma_device_type": 1 00:10:24.056 }, 00:10:24.056 { 00:10:24.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.056 "dma_device_type": 2 00:10:24.056 } 00:10:24.056 ], 00:10:24.056 "driver_specific": {} 00:10:24.056 } 00:10:24.056 ] 00:10:24.056 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.056 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:24.056 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:24.056 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:24.056 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:24.056 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.056 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.056 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.056 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:24.056 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.056 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.056 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.056 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.056 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.056 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.057 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.057 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.057 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.057 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.316 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.316 "name": "Existed_Raid", 00:10:24.316 "uuid": "afee8ec1-ecf0-478b-b6a3-b01439035a7a", 00:10:24.316 "strip_size_kb": 64, 00:10:24.316 "state": "configuring", 00:10:24.316 "raid_level": "raid0", 00:10:24.316 "superblock": true, 00:10:24.316 "num_base_bdevs": 4, 00:10:24.316 "num_base_bdevs_discovered": 3, 00:10:24.316 "num_base_bdevs_operational": 4, 00:10:24.316 "base_bdevs_list": [ 00:10:24.316 { 00:10:24.316 "name": "BaseBdev1", 00:10:24.316 "uuid": "d261c611-c723-4503-8545-c2bb6e1446ae", 00:10:24.316 "is_configured": true, 00:10:24.316 "data_offset": 2048, 00:10:24.316 "data_size": 63488 00:10:24.316 }, 00:10:24.316 { 00:10:24.316 "name": "BaseBdev2", 00:10:24.316 "uuid": 
"7901b6d7-d1d2-4c1d-a258-3d698fc83bc9", 00:10:24.316 "is_configured": true, 00:10:24.316 "data_offset": 2048, 00:10:24.316 "data_size": 63488 00:10:24.316 }, 00:10:24.316 { 00:10:24.316 "name": "BaseBdev3", 00:10:24.316 "uuid": "2a7be41c-9538-4b58-8f6e-011937178521", 00:10:24.316 "is_configured": true, 00:10:24.316 "data_offset": 2048, 00:10:24.316 "data_size": 63488 00:10:24.316 }, 00:10:24.316 { 00:10:24.316 "name": "BaseBdev4", 00:10:24.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.316 "is_configured": false, 00:10:24.316 "data_offset": 0, 00:10:24.316 "data_size": 0 00:10:24.316 } 00:10:24.316 ] 00:10:24.316 }' 00:10:24.316 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.316 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.576 [2024-09-28 08:48:02.507295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:24.576 [2024-09-28 08:48:02.507624] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:24.576 [2024-09-28 08:48:02.507665] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:24.576 [2024-09-28 08:48:02.507979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:24.576 [2024-09-28 08:48:02.508158] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:24.576 [2024-09-28 08:48:02.508177] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:10:24.576 [2024-09-28 08:48:02.508344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.576 BaseBdev4 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.576 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.576 [ 00:10:24.576 { 00:10:24.576 "name": "BaseBdev4", 00:10:24.576 "aliases": [ 00:10:24.576 "e39ac9aa-5274-4cd0-aa64-e217a5757530" 00:10:24.576 ], 00:10:24.577 "product_name": "Malloc disk", 00:10:24.577 "block_size": 512, 00:10:24.577 
"num_blocks": 65536, 00:10:24.577 "uuid": "e39ac9aa-5274-4cd0-aa64-e217a5757530", 00:10:24.577 "assigned_rate_limits": { 00:10:24.577 "rw_ios_per_sec": 0, 00:10:24.577 "rw_mbytes_per_sec": 0, 00:10:24.577 "r_mbytes_per_sec": 0, 00:10:24.577 "w_mbytes_per_sec": 0 00:10:24.577 }, 00:10:24.577 "claimed": true, 00:10:24.577 "claim_type": "exclusive_write", 00:10:24.577 "zoned": false, 00:10:24.577 "supported_io_types": { 00:10:24.577 "read": true, 00:10:24.577 "write": true, 00:10:24.577 "unmap": true, 00:10:24.577 "flush": true, 00:10:24.577 "reset": true, 00:10:24.577 "nvme_admin": false, 00:10:24.577 "nvme_io": false, 00:10:24.577 "nvme_io_md": false, 00:10:24.577 "write_zeroes": true, 00:10:24.577 "zcopy": true, 00:10:24.577 "get_zone_info": false, 00:10:24.577 "zone_management": false, 00:10:24.577 "zone_append": false, 00:10:24.577 "compare": false, 00:10:24.577 "compare_and_write": false, 00:10:24.577 "abort": true, 00:10:24.577 "seek_hole": false, 00:10:24.577 "seek_data": false, 00:10:24.577 "copy": true, 00:10:24.577 "nvme_iov_md": false 00:10:24.577 }, 00:10:24.577 "memory_domains": [ 00:10:24.577 { 00:10:24.577 "dma_device_id": "system", 00:10:24.577 "dma_device_type": 1 00:10:24.577 }, 00:10:24.577 { 00:10:24.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.577 "dma_device_type": 2 00:10:24.577 } 00:10:24.577 ], 00:10:24.577 "driver_specific": {} 00:10:24.577 } 00:10:24.577 ] 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.577 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.837 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.837 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.837 "name": "Existed_Raid", 00:10:24.837 "uuid": "afee8ec1-ecf0-478b-b6a3-b01439035a7a", 00:10:24.837 "strip_size_kb": 64, 00:10:24.837 "state": "online", 00:10:24.837 "raid_level": "raid0", 00:10:24.837 "superblock": true, 00:10:24.837 "num_base_bdevs": 4, 
00:10:24.837 "num_base_bdevs_discovered": 4, 00:10:24.837 "num_base_bdevs_operational": 4, 00:10:24.837 "base_bdevs_list": [ 00:10:24.837 { 00:10:24.837 "name": "BaseBdev1", 00:10:24.837 "uuid": "d261c611-c723-4503-8545-c2bb6e1446ae", 00:10:24.837 "is_configured": true, 00:10:24.837 "data_offset": 2048, 00:10:24.837 "data_size": 63488 00:10:24.837 }, 00:10:24.837 { 00:10:24.837 "name": "BaseBdev2", 00:10:24.837 "uuid": "7901b6d7-d1d2-4c1d-a258-3d698fc83bc9", 00:10:24.837 "is_configured": true, 00:10:24.837 "data_offset": 2048, 00:10:24.837 "data_size": 63488 00:10:24.837 }, 00:10:24.837 { 00:10:24.837 "name": "BaseBdev3", 00:10:24.837 "uuid": "2a7be41c-9538-4b58-8f6e-011937178521", 00:10:24.837 "is_configured": true, 00:10:24.837 "data_offset": 2048, 00:10:24.837 "data_size": 63488 00:10:24.837 }, 00:10:24.837 { 00:10:24.837 "name": "BaseBdev4", 00:10:24.837 "uuid": "e39ac9aa-5274-4cd0-aa64-e217a5757530", 00:10:24.837 "is_configured": true, 00:10:24.837 "data_offset": 2048, 00:10:24.837 "data_size": 63488 00:10:24.837 } 00:10:24.837 ] 00:10:24.837 }' 00:10:24.837 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.837 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.097 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:25.098 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:25.098 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:25.098 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:25.098 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:25.098 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:25.098 
08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:25.098 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:25.098 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.098 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.098 [2024-09-28 08:48:02.934903] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.098 08:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.098 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.098 "name": "Existed_Raid", 00:10:25.098 "aliases": [ 00:10:25.098 "afee8ec1-ecf0-478b-b6a3-b01439035a7a" 00:10:25.098 ], 00:10:25.098 "product_name": "Raid Volume", 00:10:25.098 "block_size": 512, 00:10:25.098 "num_blocks": 253952, 00:10:25.098 "uuid": "afee8ec1-ecf0-478b-b6a3-b01439035a7a", 00:10:25.098 "assigned_rate_limits": { 00:10:25.098 "rw_ios_per_sec": 0, 00:10:25.098 "rw_mbytes_per_sec": 0, 00:10:25.098 "r_mbytes_per_sec": 0, 00:10:25.098 "w_mbytes_per_sec": 0 00:10:25.098 }, 00:10:25.098 "claimed": false, 00:10:25.098 "zoned": false, 00:10:25.098 "supported_io_types": { 00:10:25.098 "read": true, 00:10:25.098 "write": true, 00:10:25.098 "unmap": true, 00:10:25.098 "flush": true, 00:10:25.098 "reset": true, 00:10:25.098 "nvme_admin": false, 00:10:25.098 "nvme_io": false, 00:10:25.098 "nvme_io_md": false, 00:10:25.098 "write_zeroes": true, 00:10:25.098 "zcopy": false, 00:10:25.098 "get_zone_info": false, 00:10:25.098 "zone_management": false, 00:10:25.098 "zone_append": false, 00:10:25.098 "compare": false, 00:10:25.098 "compare_and_write": false, 00:10:25.098 "abort": false, 00:10:25.098 "seek_hole": false, 00:10:25.098 "seek_data": false, 00:10:25.098 "copy": false, 00:10:25.098 
"nvme_iov_md": false 00:10:25.098 }, 00:10:25.098 "memory_domains": [ 00:10:25.098 { 00:10:25.098 "dma_device_id": "system", 00:10:25.098 "dma_device_type": 1 00:10:25.098 }, 00:10:25.098 { 00:10:25.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.098 "dma_device_type": 2 00:10:25.098 }, 00:10:25.098 { 00:10:25.098 "dma_device_id": "system", 00:10:25.098 "dma_device_type": 1 00:10:25.098 }, 00:10:25.098 { 00:10:25.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.098 "dma_device_type": 2 00:10:25.098 }, 00:10:25.098 { 00:10:25.098 "dma_device_id": "system", 00:10:25.098 "dma_device_type": 1 00:10:25.098 }, 00:10:25.098 { 00:10:25.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.098 "dma_device_type": 2 00:10:25.098 }, 00:10:25.098 { 00:10:25.098 "dma_device_id": "system", 00:10:25.098 "dma_device_type": 1 00:10:25.098 }, 00:10:25.098 { 00:10:25.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.098 "dma_device_type": 2 00:10:25.098 } 00:10:25.098 ], 00:10:25.098 "driver_specific": { 00:10:25.098 "raid": { 00:10:25.098 "uuid": "afee8ec1-ecf0-478b-b6a3-b01439035a7a", 00:10:25.098 "strip_size_kb": 64, 00:10:25.098 "state": "online", 00:10:25.098 "raid_level": "raid0", 00:10:25.098 "superblock": true, 00:10:25.098 "num_base_bdevs": 4, 00:10:25.098 "num_base_bdevs_discovered": 4, 00:10:25.098 "num_base_bdevs_operational": 4, 00:10:25.098 "base_bdevs_list": [ 00:10:25.098 { 00:10:25.098 "name": "BaseBdev1", 00:10:25.098 "uuid": "d261c611-c723-4503-8545-c2bb6e1446ae", 00:10:25.098 "is_configured": true, 00:10:25.098 "data_offset": 2048, 00:10:25.098 "data_size": 63488 00:10:25.098 }, 00:10:25.098 { 00:10:25.098 "name": "BaseBdev2", 00:10:25.098 "uuid": "7901b6d7-d1d2-4c1d-a258-3d698fc83bc9", 00:10:25.098 "is_configured": true, 00:10:25.098 "data_offset": 2048, 00:10:25.098 "data_size": 63488 00:10:25.098 }, 00:10:25.098 { 00:10:25.098 "name": "BaseBdev3", 00:10:25.098 "uuid": "2a7be41c-9538-4b58-8f6e-011937178521", 00:10:25.098 "is_configured": true, 
00:10:25.098 "data_offset": 2048, 00:10:25.098 "data_size": 63488 00:10:25.098 }, 00:10:25.098 { 00:10:25.098 "name": "BaseBdev4", 00:10:25.098 "uuid": "e39ac9aa-5274-4cd0-aa64-e217a5757530", 00:10:25.098 "is_configured": true, 00:10:25.098 "data_offset": 2048, 00:10:25.098 "data_size": 63488 00:10:25.098 } 00:10:25.098 ] 00:10:25.098 } 00:10:25.098 } 00:10:25.098 }' 00:10:25.098 08:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.098 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:25.098 BaseBdev2 00:10:25.098 BaseBdev3 00:10:25.098 BaseBdev4' 00:10:25.098 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.098 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.098 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.098 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.098 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:25.098 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.098 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.098 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.359 08:48:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.359 [2024-09-28 08:48:03.206152] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:25.359 [2024-09-28 08:48:03.206184] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.359 [2024-09-28 08:48:03.206239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.359 "name": "Existed_Raid", 00:10:25.359 "uuid": "afee8ec1-ecf0-478b-b6a3-b01439035a7a", 00:10:25.359 "strip_size_kb": 64, 00:10:25.359 "state": "offline", 00:10:25.359 "raid_level": "raid0", 00:10:25.359 "superblock": true, 00:10:25.359 "num_base_bdevs": 4, 00:10:25.359 "num_base_bdevs_discovered": 3, 00:10:25.359 "num_base_bdevs_operational": 3, 00:10:25.359 "base_bdevs_list": [ 00:10:25.359 { 00:10:25.359 "name": null, 00:10:25.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.359 "is_configured": false, 00:10:25.359 "data_offset": 0, 00:10:25.359 "data_size": 63488 00:10:25.359 }, 00:10:25.359 { 00:10:25.359 "name": "BaseBdev2", 00:10:25.359 "uuid": "7901b6d7-d1d2-4c1d-a258-3d698fc83bc9", 00:10:25.359 "is_configured": true, 00:10:25.359 "data_offset": 2048, 00:10:25.359 "data_size": 63488 00:10:25.359 }, 00:10:25.359 { 00:10:25.359 "name": "BaseBdev3", 00:10:25.359 "uuid": "2a7be41c-9538-4b58-8f6e-011937178521", 00:10:25.359 "is_configured": true, 00:10:25.359 "data_offset": 2048, 00:10:25.359 "data_size": 63488 00:10:25.359 }, 00:10:25.359 { 00:10:25.359 "name": "BaseBdev4", 00:10:25.359 "uuid": "e39ac9aa-5274-4cd0-aa64-e217a5757530", 00:10:25.359 "is_configured": true, 00:10:25.359 "data_offset": 2048, 00:10:25.359 "data_size": 63488 00:10:25.359 } 00:10:25.359 ] 00:10:25.359 }' 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.359 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.929 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:25.929 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.929 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:25.929 08:48:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.929 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.929 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.929 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.929 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:25.929 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.929 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:25.929 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.930 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.930 [2024-09-28 08:48:03.706517] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:25.930 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.930 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:25.930 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.930 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.930 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:25.930 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.930 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.930 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:25.930 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:25.930 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.930 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:25.930 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.930 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.930 [2024-09-28 08:48:03.864030] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:26.190 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.190 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:26.190 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.190 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.190 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.190 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.190 08:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:26.190 08:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:26.190 08:48:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.190 [2024-09-28 08:48:04.026628] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:26.190 [2024-09-28 08:48:04.026698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.190 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.451 BaseBdev2 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.451 [ 00:10:26.451 { 00:10:26.451 "name": "BaseBdev2", 00:10:26.451 "aliases": [ 00:10:26.451 
"1980f466-dde9-4115-a6d9-28447013bdc1" 00:10:26.451 ], 00:10:26.451 "product_name": "Malloc disk", 00:10:26.451 "block_size": 512, 00:10:26.451 "num_blocks": 65536, 00:10:26.451 "uuid": "1980f466-dde9-4115-a6d9-28447013bdc1", 00:10:26.451 "assigned_rate_limits": { 00:10:26.451 "rw_ios_per_sec": 0, 00:10:26.451 "rw_mbytes_per_sec": 0, 00:10:26.451 "r_mbytes_per_sec": 0, 00:10:26.451 "w_mbytes_per_sec": 0 00:10:26.451 }, 00:10:26.451 "claimed": false, 00:10:26.451 "zoned": false, 00:10:26.451 "supported_io_types": { 00:10:26.451 "read": true, 00:10:26.451 "write": true, 00:10:26.451 "unmap": true, 00:10:26.451 "flush": true, 00:10:26.451 "reset": true, 00:10:26.451 "nvme_admin": false, 00:10:26.451 "nvme_io": false, 00:10:26.451 "nvme_io_md": false, 00:10:26.451 "write_zeroes": true, 00:10:26.451 "zcopy": true, 00:10:26.451 "get_zone_info": false, 00:10:26.451 "zone_management": false, 00:10:26.451 "zone_append": false, 00:10:26.451 "compare": false, 00:10:26.451 "compare_and_write": false, 00:10:26.451 "abort": true, 00:10:26.451 "seek_hole": false, 00:10:26.451 "seek_data": false, 00:10:26.451 "copy": true, 00:10:26.451 "nvme_iov_md": false 00:10:26.451 }, 00:10:26.451 "memory_domains": [ 00:10:26.451 { 00:10:26.451 "dma_device_id": "system", 00:10:26.451 "dma_device_type": 1 00:10:26.451 }, 00:10:26.451 { 00:10:26.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.451 "dma_device_type": 2 00:10:26.451 } 00:10:26.451 ], 00:10:26.451 "driver_specific": {} 00:10:26.451 } 00:10:26.451 ] 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.451 08:48:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.451 BaseBdev3 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.451 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.451 [ 00:10:26.451 { 
00:10:26.451 "name": "BaseBdev3", 00:10:26.451 "aliases": [ 00:10:26.451 "74e57ad7-a491-40da-9d3c-dd3a9f1b51fe" 00:10:26.451 ], 00:10:26.451 "product_name": "Malloc disk", 00:10:26.451 "block_size": 512, 00:10:26.452 "num_blocks": 65536, 00:10:26.452 "uuid": "74e57ad7-a491-40da-9d3c-dd3a9f1b51fe", 00:10:26.452 "assigned_rate_limits": { 00:10:26.452 "rw_ios_per_sec": 0, 00:10:26.452 "rw_mbytes_per_sec": 0, 00:10:26.452 "r_mbytes_per_sec": 0, 00:10:26.452 "w_mbytes_per_sec": 0 00:10:26.452 }, 00:10:26.452 "claimed": false, 00:10:26.452 "zoned": false, 00:10:26.452 "supported_io_types": { 00:10:26.452 "read": true, 00:10:26.452 "write": true, 00:10:26.452 "unmap": true, 00:10:26.452 "flush": true, 00:10:26.452 "reset": true, 00:10:26.452 "nvme_admin": false, 00:10:26.452 "nvme_io": false, 00:10:26.452 "nvme_io_md": false, 00:10:26.452 "write_zeroes": true, 00:10:26.452 "zcopy": true, 00:10:26.452 "get_zone_info": false, 00:10:26.452 "zone_management": false, 00:10:26.452 "zone_append": false, 00:10:26.452 "compare": false, 00:10:26.452 "compare_and_write": false, 00:10:26.452 "abort": true, 00:10:26.452 "seek_hole": false, 00:10:26.452 "seek_data": false, 00:10:26.452 "copy": true, 00:10:26.452 "nvme_iov_md": false 00:10:26.452 }, 00:10:26.452 "memory_domains": [ 00:10:26.452 { 00:10:26.452 "dma_device_id": "system", 00:10:26.452 "dma_device_type": 1 00:10:26.452 }, 00:10:26.452 { 00:10:26.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.452 "dma_device_type": 2 00:10:26.452 } 00:10:26.452 ], 00:10:26.452 "driver_specific": {} 00:10:26.452 } 00:10:26.452 ] 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.452 BaseBdev4 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:26.452 [ 00:10:26.452 { 00:10:26.452 "name": "BaseBdev4", 00:10:26.452 "aliases": [ 00:10:26.452 "707652ad-f20b-445f-931f-3b4ff0fb23d7" 00:10:26.452 ], 00:10:26.452 "product_name": "Malloc disk", 00:10:26.452 "block_size": 512, 00:10:26.452 "num_blocks": 65536, 00:10:26.452 "uuid": "707652ad-f20b-445f-931f-3b4ff0fb23d7", 00:10:26.452 "assigned_rate_limits": { 00:10:26.452 "rw_ios_per_sec": 0, 00:10:26.452 "rw_mbytes_per_sec": 0, 00:10:26.452 "r_mbytes_per_sec": 0, 00:10:26.452 "w_mbytes_per_sec": 0 00:10:26.452 }, 00:10:26.452 "claimed": false, 00:10:26.452 "zoned": false, 00:10:26.452 "supported_io_types": { 00:10:26.452 "read": true, 00:10:26.452 "write": true, 00:10:26.452 "unmap": true, 00:10:26.452 "flush": true, 00:10:26.452 "reset": true, 00:10:26.452 "nvme_admin": false, 00:10:26.452 "nvme_io": false, 00:10:26.452 "nvme_io_md": false, 00:10:26.452 "write_zeroes": true, 00:10:26.452 "zcopy": true, 00:10:26.452 "get_zone_info": false, 00:10:26.452 "zone_management": false, 00:10:26.452 "zone_append": false, 00:10:26.452 "compare": false, 00:10:26.452 "compare_and_write": false, 00:10:26.452 "abort": true, 00:10:26.452 "seek_hole": false, 00:10:26.452 "seek_data": false, 00:10:26.452 "copy": true, 00:10:26.452 "nvme_iov_md": false 00:10:26.452 }, 00:10:26.452 "memory_domains": [ 00:10:26.452 { 00:10:26.452 "dma_device_id": "system", 00:10:26.452 "dma_device_type": 1 00:10:26.452 }, 00:10:26.452 { 00:10:26.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.452 "dma_device_type": 2 00:10:26.452 } 00:10:26.452 ], 00:10:26.452 "driver_specific": {} 00:10:26.452 } 00:10:26.452 ] 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:26.452 08:48:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.452 [2024-09-28 08:48:04.416858] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.452 [2024-09-28 08:48:04.416908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.452 [2024-09-28 08:48:04.416930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.452 [2024-09-28 08:48:04.419033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.452 [2024-09-28 08:48:04.419088] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.452 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.712 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.712 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.712 "name": "Existed_Raid", 00:10:26.712 "uuid": "cff2dc6b-7a80-45c2-ab17-a916dc173d44", 00:10:26.712 "strip_size_kb": 64, 00:10:26.712 "state": "configuring", 00:10:26.712 "raid_level": "raid0", 00:10:26.712 "superblock": true, 00:10:26.712 "num_base_bdevs": 4, 00:10:26.712 "num_base_bdevs_discovered": 3, 00:10:26.712 "num_base_bdevs_operational": 4, 00:10:26.712 "base_bdevs_list": [ 00:10:26.712 { 00:10:26.712 "name": "BaseBdev1", 00:10:26.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.712 "is_configured": false, 00:10:26.712 "data_offset": 0, 00:10:26.712 "data_size": 0 00:10:26.712 }, 00:10:26.712 { 00:10:26.712 "name": "BaseBdev2", 00:10:26.712 "uuid": "1980f466-dde9-4115-a6d9-28447013bdc1", 00:10:26.712 "is_configured": true, 00:10:26.712 "data_offset": 2048, 00:10:26.712 "data_size": 63488 
00:10:26.712 }, 00:10:26.712 { 00:10:26.712 "name": "BaseBdev3", 00:10:26.712 "uuid": "74e57ad7-a491-40da-9d3c-dd3a9f1b51fe", 00:10:26.712 "is_configured": true, 00:10:26.712 "data_offset": 2048, 00:10:26.712 "data_size": 63488 00:10:26.712 }, 00:10:26.712 { 00:10:26.712 "name": "BaseBdev4", 00:10:26.712 "uuid": "707652ad-f20b-445f-931f-3b4ff0fb23d7", 00:10:26.712 "is_configured": true, 00:10:26.712 "data_offset": 2048, 00:10:26.712 "data_size": 63488 00:10:26.712 } 00:10:26.713 ] 00:10:26.713 }' 00:10:26.713 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.713 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.972 [2024-09-28 08:48:04.812163] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.972 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.972 "name": "Existed_Raid", 00:10:26.972 "uuid": "cff2dc6b-7a80-45c2-ab17-a916dc173d44", 00:10:26.972 "strip_size_kb": 64, 00:10:26.972 "state": "configuring", 00:10:26.972 "raid_level": "raid0", 00:10:26.972 "superblock": true, 00:10:26.972 "num_base_bdevs": 4, 00:10:26.972 "num_base_bdevs_discovered": 2, 00:10:26.972 "num_base_bdevs_operational": 4, 00:10:26.972 "base_bdevs_list": [ 00:10:26.972 { 00:10:26.972 "name": "BaseBdev1", 00:10:26.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.972 "is_configured": false, 00:10:26.972 "data_offset": 0, 00:10:26.972 "data_size": 0 00:10:26.972 }, 00:10:26.972 { 00:10:26.973 "name": null, 00:10:26.973 "uuid": "1980f466-dde9-4115-a6d9-28447013bdc1", 00:10:26.973 "is_configured": false, 00:10:26.973 "data_offset": 0, 00:10:26.973 "data_size": 63488 
00:10:26.973 }, 00:10:26.973 { 00:10:26.973 "name": "BaseBdev3", 00:10:26.973 "uuid": "74e57ad7-a491-40da-9d3c-dd3a9f1b51fe", 00:10:26.973 "is_configured": true, 00:10:26.973 "data_offset": 2048, 00:10:26.973 "data_size": 63488 00:10:26.973 }, 00:10:26.973 { 00:10:26.973 "name": "BaseBdev4", 00:10:26.973 "uuid": "707652ad-f20b-445f-931f-3b4ff0fb23d7", 00:10:26.973 "is_configured": true, 00:10:26.973 "data_offset": 2048, 00:10:26.973 "data_size": 63488 00:10:26.973 } 00:10:26.973 ] 00:10:26.973 }' 00:10:26.973 08:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.973 08:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.232 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:27.232 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.232 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.232 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.494 [2024-09-28 08:48:05.296767] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.494 BaseBdev1 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.494 [ 00:10:27.494 { 00:10:27.494 "name": "BaseBdev1", 00:10:27.494 "aliases": [ 00:10:27.494 "ac323dae-aa86-49d9-bdeb-74dcf9f9a43e" 00:10:27.494 ], 00:10:27.494 "product_name": "Malloc disk", 00:10:27.494 "block_size": 512, 00:10:27.494 "num_blocks": 65536, 00:10:27.494 "uuid": "ac323dae-aa86-49d9-bdeb-74dcf9f9a43e", 00:10:27.494 "assigned_rate_limits": { 00:10:27.494 "rw_ios_per_sec": 0, 00:10:27.494 "rw_mbytes_per_sec": 0, 
00:10:27.494 "r_mbytes_per_sec": 0, 00:10:27.494 "w_mbytes_per_sec": 0 00:10:27.494 }, 00:10:27.494 "claimed": true, 00:10:27.494 "claim_type": "exclusive_write", 00:10:27.494 "zoned": false, 00:10:27.494 "supported_io_types": { 00:10:27.494 "read": true, 00:10:27.494 "write": true, 00:10:27.494 "unmap": true, 00:10:27.494 "flush": true, 00:10:27.494 "reset": true, 00:10:27.494 "nvme_admin": false, 00:10:27.494 "nvme_io": false, 00:10:27.494 "nvme_io_md": false, 00:10:27.494 "write_zeroes": true, 00:10:27.494 "zcopy": true, 00:10:27.494 "get_zone_info": false, 00:10:27.494 "zone_management": false, 00:10:27.494 "zone_append": false, 00:10:27.494 "compare": false, 00:10:27.494 "compare_and_write": false, 00:10:27.494 "abort": true, 00:10:27.494 "seek_hole": false, 00:10:27.494 "seek_data": false, 00:10:27.494 "copy": true, 00:10:27.494 "nvme_iov_md": false 00:10:27.494 }, 00:10:27.494 "memory_domains": [ 00:10:27.494 { 00:10:27.494 "dma_device_id": "system", 00:10:27.494 "dma_device_type": 1 00:10:27.494 }, 00:10:27.494 { 00:10:27.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.494 "dma_device_type": 2 00:10:27.494 } 00:10:27.494 ], 00:10:27.494 "driver_specific": {} 00:10:27.494 } 00:10:27.494 ] 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.494 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.495 08:48:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.495 "name": "Existed_Raid", 00:10:27.495 "uuid": "cff2dc6b-7a80-45c2-ab17-a916dc173d44", 00:10:27.495 "strip_size_kb": 64, 00:10:27.495 "state": "configuring", 00:10:27.495 "raid_level": "raid0", 00:10:27.495 "superblock": true, 00:10:27.495 "num_base_bdevs": 4, 00:10:27.495 "num_base_bdevs_discovered": 3, 00:10:27.495 "num_base_bdevs_operational": 4, 00:10:27.495 "base_bdevs_list": [ 00:10:27.495 { 00:10:27.495 "name": "BaseBdev1", 00:10:27.495 "uuid": "ac323dae-aa86-49d9-bdeb-74dcf9f9a43e", 00:10:27.495 "is_configured": true, 00:10:27.495 "data_offset": 2048, 00:10:27.495 "data_size": 63488 00:10:27.495 }, 00:10:27.495 { 
00:10:27.495 "name": null, 00:10:27.495 "uuid": "1980f466-dde9-4115-a6d9-28447013bdc1", 00:10:27.495 "is_configured": false, 00:10:27.495 "data_offset": 0, 00:10:27.495 "data_size": 63488 00:10:27.495 }, 00:10:27.495 { 00:10:27.495 "name": "BaseBdev3", 00:10:27.495 "uuid": "74e57ad7-a491-40da-9d3c-dd3a9f1b51fe", 00:10:27.495 "is_configured": true, 00:10:27.495 "data_offset": 2048, 00:10:27.495 "data_size": 63488 00:10:27.495 }, 00:10:27.495 { 00:10:27.495 "name": "BaseBdev4", 00:10:27.495 "uuid": "707652ad-f20b-445f-931f-3b4ff0fb23d7", 00:10:27.495 "is_configured": true, 00:10:27.495 "data_offset": 2048, 00:10:27.495 "data_size": 63488 00:10:27.495 } 00:10:27.495 ] 00:10:27.495 }' 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.495 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.064 [2024-09-28 08:48:05.803944] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.064 08:48:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.064 "name": "Existed_Raid", 00:10:28.064 "uuid": "cff2dc6b-7a80-45c2-ab17-a916dc173d44", 00:10:28.064 "strip_size_kb": 64, 00:10:28.064 "state": "configuring", 00:10:28.064 "raid_level": "raid0", 00:10:28.064 "superblock": true, 00:10:28.064 "num_base_bdevs": 4, 00:10:28.064 "num_base_bdevs_discovered": 2, 00:10:28.064 "num_base_bdevs_operational": 4, 00:10:28.064 "base_bdevs_list": [ 00:10:28.064 { 00:10:28.064 "name": "BaseBdev1", 00:10:28.064 "uuid": "ac323dae-aa86-49d9-bdeb-74dcf9f9a43e", 00:10:28.064 "is_configured": true, 00:10:28.064 "data_offset": 2048, 00:10:28.064 "data_size": 63488 00:10:28.064 }, 00:10:28.064 { 00:10:28.064 "name": null, 00:10:28.064 "uuid": "1980f466-dde9-4115-a6d9-28447013bdc1", 00:10:28.064 "is_configured": false, 00:10:28.064 "data_offset": 0, 00:10:28.064 "data_size": 63488 00:10:28.064 }, 00:10:28.064 { 00:10:28.064 "name": null, 00:10:28.064 "uuid": "74e57ad7-a491-40da-9d3c-dd3a9f1b51fe", 00:10:28.064 "is_configured": false, 00:10:28.064 "data_offset": 0, 00:10:28.064 "data_size": 63488 00:10:28.064 }, 00:10:28.064 { 00:10:28.064 "name": "BaseBdev4", 00:10:28.064 "uuid": "707652ad-f20b-445f-931f-3b4ff0fb23d7", 00:10:28.064 "is_configured": true, 00:10:28.064 "data_offset": 2048, 00:10:28.064 "data_size": 63488 00:10:28.064 } 00:10:28.064 ] 00:10:28.064 }' 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.064 08:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.324 08:48:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.324 [2024-09-28 08:48:06.267183] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.324 "name": "Existed_Raid", 00:10:28.324 "uuid": "cff2dc6b-7a80-45c2-ab17-a916dc173d44", 00:10:28.324 "strip_size_kb": 64, 00:10:28.324 "state": "configuring", 00:10:28.324 "raid_level": "raid0", 00:10:28.324 "superblock": true, 00:10:28.324 "num_base_bdevs": 4, 00:10:28.324 "num_base_bdevs_discovered": 3, 00:10:28.324 "num_base_bdevs_operational": 4, 00:10:28.324 "base_bdevs_list": [ 00:10:28.324 { 00:10:28.324 "name": "BaseBdev1", 00:10:28.324 "uuid": "ac323dae-aa86-49d9-bdeb-74dcf9f9a43e", 00:10:28.324 "is_configured": true, 00:10:28.324 "data_offset": 2048, 00:10:28.324 "data_size": 63488 00:10:28.324 }, 00:10:28.324 { 00:10:28.324 "name": null, 00:10:28.324 "uuid": "1980f466-dde9-4115-a6d9-28447013bdc1", 00:10:28.324 "is_configured": false, 00:10:28.324 "data_offset": 0, 00:10:28.324 "data_size": 63488 00:10:28.324 }, 00:10:28.324 { 00:10:28.324 "name": "BaseBdev3", 00:10:28.324 "uuid": "74e57ad7-a491-40da-9d3c-dd3a9f1b51fe", 00:10:28.324 "is_configured": true, 00:10:28.324 "data_offset": 2048, 00:10:28.324 "data_size": 63488 00:10:28.324 }, 00:10:28.324 { 00:10:28.324 "name": "BaseBdev4", 00:10:28.324 "uuid": 
"707652ad-f20b-445f-931f-3b4ff0fb23d7", 00:10:28.324 "is_configured": true, 00:10:28.324 "data_offset": 2048, 00:10:28.324 "data_size": 63488 00:10:28.324 } 00:10:28.324 ] 00:10:28.324 }' 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.324 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.894 [2024-09-28 08:48:06.638548] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.894 "name": "Existed_Raid", 00:10:28.894 "uuid": "cff2dc6b-7a80-45c2-ab17-a916dc173d44", 00:10:28.894 "strip_size_kb": 64, 00:10:28.894 "state": "configuring", 00:10:28.894 "raid_level": "raid0", 00:10:28.894 "superblock": true, 00:10:28.894 "num_base_bdevs": 4, 00:10:28.894 "num_base_bdevs_discovered": 2, 00:10:28.894 "num_base_bdevs_operational": 4, 00:10:28.894 "base_bdevs_list": [ 00:10:28.894 { 00:10:28.894 "name": null, 00:10:28.894 
"uuid": "ac323dae-aa86-49d9-bdeb-74dcf9f9a43e", 00:10:28.894 "is_configured": false, 00:10:28.894 "data_offset": 0, 00:10:28.894 "data_size": 63488 00:10:28.894 }, 00:10:28.894 { 00:10:28.894 "name": null, 00:10:28.894 "uuid": "1980f466-dde9-4115-a6d9-28447013bdc1", 00:10:28.894 "is_configured": false, 00:10:28.894 "data_offset": 0, 00:10:28.894 "data_size": 63488 00:10:28.894 }, 00:10:28.894 { 00:10:28.894 "name": "BaseBdev3", 00:10:28.894 "uuid": "74e57ad7-a491-40da-9d3c-dd3a9f1b51fe", 00:10:28.894 "is_configured": true, 00:10:28.894 "data_offset": 2048, 00:10:28.894 "data_size": 63488 00:10:28.894 }, 00:10:28.894 { 00:10:28.894 "name": "BaseBdev4", 00:10:28.894 "uuid": "707652ad-f20b-445f-931f-3b4ff0fb23d7", 00:10:28.894 "is_configured": true, 00:10:28.894 "data_offset": 2048, 00:10:28.894 "data_size": 63488 00:10:28.894 } 00:10:28.894 ] 00:10:28.894 }' 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.894 08:48:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.464 [2024-09-28 08:48:07.196264] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.464 "name": "Existed_Raid", 00:10:29.464 "uuid": "cff2dc6b-7a80-45c2-ab17-a916dc173d44", 00:10:29.464 "strip_size_kb": 64, 00:10:29.464 "state": "configuring", 00:10:29.464 "raid_level": "raid0", 00:10:29.464 "superblock": true, 00:10:29.464 "num_base_bdevs": 4, 00:10:29.464 "num_base_bdevs_discovered": 3, 00:10:29.464 "num_base_bdevs_operational": 4, 00:10:29.464 "base_bdevs_list": [ 00:10:29.464 { 00:10:29.464 "name": null, 00:10:29.464 "uuid": "ac323dae-aa86-49d9-bdeb-74dcf9f9a43e", 00:10:29.464 "is_configured": false, 00:10:29.464 "data_offset": 0, 00:10:29.464 "data_size": 63488 00:10:29.464 }, 00:10:29.464 { 00:10:29.464 "name": "BaseBdev2", 00:10:29.464 "uuid": "1980f466-dde9-4115-a6d9-28447013bdc1", 00:10:29.464 "is_configured": true, 00:10:29.464 "data_offset": 2048, 00:10:29.464 "data_size": 63488 00:10:29.464 }, 00:10:29.464 { 00:10:29.464 "name": "BaseBdev3", 00:10:29.464 "uuid": "74e57ad7-a491-40da-9d3c-dd3a9f1b51fe", 00:10:29.464 "is_configured": true, 00:10:29.464 "data_offset": 2048, 00:10:29.464 "data_size": 63488 00:10:29.464 }, 00:10:29.464 { 00:10:29.464 "name": "BaseBdev4", 00:10:29.464 "uuid": "707652ad-f20b-445f-931f-3b4ff0fb23d7", 00:10:29.464 "is_configured": true, 00:10:29.464 "data_offset": 2048, 00:10:29.464 "data_size": 63488 00:10:29.464 } 00:10:29.464 ] 00:10:29.464 }' 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.464 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.724 08:48:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ac323dae-aa86-49d9-bdeb-74dcf9f9a43e 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.724 [2024-09-28 08:48:07.694606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:29.724 [2024-09-28 08:48:07.694891] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:29.724 [2024-09-28 08:48:07.694912] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:29.724 [2024-09-28 08:48:07.695247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:29.724 [2024-09-28 08:48:07.695410] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:29.724 [2024-09-28 08:48:07.695423] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:29.724 NewBaseBdev 00:10:29.724 [2024-09-28 08:48:07.695576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:29.724 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.724 08:48:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.984 [ 00:10:29.984 { 00:10:29.984 "name": "NewBaseBdev", 00:10:29.984 "aliases": [ 00:10:29.984 "ac323dae-aa86-49d9-bdeb-74dcf9f9a43e" 00:10:29.984 ], 00:10:29.984 "product_name": "Malloc disk", 00:10:29.984 "block_size": 512, 00:10:29.984 "num_blocks": 65536, 00:10:29.984 "uuid": "ac323dae-aa86-49d9-bdeb-74dcf9f9a43e", 00:10:29.984 "assigned_rate_limits": { 00:10:29.984 "rw_ios_per_sec": 0, 00:10:29.984 "rw_mbytes_per_sec": 0, 00:10:29.984 "r_mbytes_per_sec": 0, 00:10:29.984 "w_mbytes_per_sec": 0 00:10:29.984 }, 00:10:29.984 "claimed": true, 00:10:29.984 "claim_type": "exclusive_write", 00:10:29.984 "zoned": false, 00:10:29.984 "supported_io_types": { 00:10:29.984 "read": true, 00:10:29.984 "write": true, 00:10:29.984 "unmap": true, 00:10:29.984 "flush": true, 00:10:29.984 "reset": true, 00:10:29.984 "nvme_admin": false, 00:10:29.984 "nvme_io": false, 00:10:29.984 "nvme_io_md": false, 00:10:29.984 "write_zeroes": true, 00:10:29.984 "zcopy": true, 00:10:29.984 "get_zone_info": false, 00:10:29.984 "zone_management": false, 00:10:29.984 "zone_append": false, 00:10:29.984 "compare": false, 00:10:29.984 "compare_and_write": false, 00:10:29.984 "abort": true, 00:10:29.984 "seek_hole": false, 00:10:29.984 "seek_data": false, 00:10:29.984 "copy": true, 00:10:29.984 "nvme_iov_md": false 00:10:29.984 }, 00:10:29.984 "memory_domains": [ 00:10:29.984 { 00:10:29.984 "dma_device_id": "system", 00:10:29.984 "dma_device_type": 1 00:10:29.984 }, 00:10:29.984 { 00:10:29.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.984 "dma_device_type": 2 00:10:29.984 } 00:10:29.984 ], 00:10:29.984 "driver_specific": {} 00:10:29.984 } 00:10:29.984 ] 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:29.984 08:48:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.984 "name": "Existed_Raid", 00:10:29.984 "uuid": "cff2dc6b-7a80-45c2-ab17-a916dc173d44", 00:10:29.984 "strip_size_kb": 64, 00:10:29.984 
"state": "online", 00:10:29.984 "raid_level": "raid0", 00:10:29.984 "superblock": true, 00:10:29.984 "num_base_bdevs": 4, 00:10:29.984 "num_base_bdevs_discovered": 4, 00:10:29.984 "num_base_bdevs_operational": 4, 00:10:29.984 "base_bdevs_list": [ 00:10:29.984 { 00:10:29.984 "name": "NewBaseBdev", 00:10:29.984 "uuid": "ac323dae-aa86-49d9-bdeb-74dcf9f9a43e", 00:10:29.984 "is_configured": true, 00:10:29.984 "data_offset": 2048, 00:10:29.984 "data_size": 63488 00:10:29.984 }, 00:10:29.984 { 00:10:29.984 "name": "BaseBdev2", 00:10:29.984 "uuid": "1980f466-dde9-4115-a6d9-28447013bdc1", 00:10:29.984 "is_configured": true, 00:10:29.984 "data_offset": 2048, 00:10:29.984 "data_size": 63488 00:10:29.984 }, 00:10:29.984 { 00:10:29.984 "name": "BaseBdev3", 00:10:29.984 "uuid": "74e57ad7-a491-40da-9d3c-dd3a9f1b51fe", 00:10:29.984 "is_configured": true, 00:10:29.984 "data_offset": 2048, 00:10:29.984 "data_size": 63488 00:10:29.984 }, 00:10:29.984 { 00:10:29.984 "name": "BaseBdev4", 00:10:29.984 "uuid": "707652ad-f20b-445f-931f-3b4ff0fb23d7", 00:10:29.984 "is_configured": true, 00:10:29.984 "data_offset": 2048, 00:10:29.984 "data_size": 63488 00:10:29.984 } 00:10:29.984 ] 00:10:29.984 }' 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.984 08:48:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.244 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:30.244 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:30.244 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:30.244 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:30.244 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:30.244 
08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:30.244 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:30.244 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.244 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.244 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:30.244 [2024-09-28 08:48:08.166197] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.244 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.244 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:30.244 "name": "Existed_Raid", 00:10:30.244 "aliases": [ 00:10:30.244 "cff2dc6b-7a80-45c2-ab17-a916dc173d44" 00:10:30.244 ], 00:10:30.244 "product_name": "Raid Volume", 00:10:30.244 "block_size": 512, 00:10:30.244 "num_blocks": 253952, 00:10:30.244 "uuid": "cff2dc6b-7a80-45c2-ab17-a916dc173d44", 00:10:30.244 "assigned_rate_limits": { 00:10:30.244 "rw_ios_per_sec": 0, 00:10:30.244 "rw_mbytes_per_sec": 0, 00:10:30.244 "r_mbytes_per_sec": 0, 00:10:30.244 "w_mbytes_per_sec": 0 00:10:30.244 }, 00:10:30.244 "claimed": false, 00:10:30.244 "zoned": false, 00:10:30.244 "supported_io_types": { 00:10:30.244 "read": true, 00:10:30.244 "write": true, 00:10:30.244 "unmap": true, 00:10:30.244 "flush": true, 00:10:30.244 "reset": true, 00:10:30.244 "nvme_admin": false, 00:10:30.244 "nvme_io": false, 00:10:30.244 "nvme_io_md": false, 00:10:30.244 "write_zeroes": true, 00:10:30.244 "zcopy": false, 00:10:30.244 "get_zone_info": false, 00:10:30.244 "zone_management": false, 00:10:30.244 "zone_append": false, 00:10:30.244 "compare": false, 00:10:30.244 "compare_and_write": false, 00:10:30.244 "abort": 
false, 00:10:30.244 "seek_hole": false, 00:10:30.244 "seek_data": false, 00:10:30.244 "copy": false, 00:10:30.244 "nvme_iov_md": false 00:10:30.244 }, 00:10:30.244 "memory_domains": [ 00:10:30.244 { 00:10:30.244 "dma_device_id": "system", 00:10:30.244 "dma_device_type": 1 00:10:30.244 }, 00:10:30.244 { 00:10:30.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.244 "dma_device_type": 2 00:10:30.244 }, 00:10:30.244 { 00:10:30.244 "dma_device_id": "system", 00:10:30.244 "dma_device_type": 1 00:10:30.244 }, 00:10:30.244 { 00:10:30.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.244 "dma_device_type": 2 00:10:30.244 }, 00:10:30.244 { 00:10:30.244 "dma_device_id": "system", 00:10:30.244 "dma_device_type": 1 00:10:30.244 }, 00:10:30.244 { 00:10:30.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.244 "dma_device_type": 2 00:10:30.244 }, 00:10:30.244 { 00:10:30.244 "dma_device_id": "system", 00:10:30.244 "dma_device_type": 1 00:10:30.244 }, 00:10:30.244 { 00:10:30.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.244 "dma_device_type": 2 00:10:30.244 } 00:10:30.244 ], 00:10:30.244 "driver_specific": { 00:10:30.244 "raid": { 00:10:30.244 "uuid": "cff2dc6b-7a80-45c2-ab17-a916dc173d44", 00:10:30.244 "strip_size_kb": 64, 00:10:30.244 "state": "online", 00:10:30.244 "raid_level": "raid0", 00:10:30.244 "superblock": true, 00:10:30.244 "num_base_bdevs": 4, 00:10:30.244 "num_base_bdevs_discovered": 4, 00:10:30.244 "num_base_bdevs_operational": 4, 00:10:30.244 "base_bdevs_list": [ 00:10:30.244 { 00:10:30.244 "name": "NewBaseBdev", 00:10:30.244 "uuid": "ac323dae-aa86-49d9-bdeb-74dcf9f9a43e", 00:10:30.244 "is_configured": true, 00:10:30.244 "data_offset": 2048, 00:10:30.244 "data_size": 63488 00:10:30.244 }, 00:10:30.244 { 00:10:30.244 "name": "BaseBdev2", 00:10:30.244 "uuid": "1980f466-dde9-4115-a6d9-28447013bdc1", 00:10:30.244 "is_configured": true, 00:10:30.244 "data_offset": 2048, 00:10:30.244 "data_size": 63488 00:10:30.244 }, 00:10:30.244 { 00:10:30.244 
"name": "BaseBdev3", 00:10:30.244 "uuid": "74e57ad7-a491-40da-9d3c-dd3a9f1b51fe", 00:10:30.244 "is_configured": true, 00:10:30.244 "data_offset": 2048, 00:10:30.244 "data_size": 63488 00:10:30.244 }, 00:10:30.244 { 00:10:30.244 "name": "BaseBdev4", 00:10:30.244 "uuid": "707652ad-f20b-445f-931f-3b4ff0fb23d7", 00:10:30.244 "is_configured": true, 00:10:30.244 "data_offset": 2048, 00:10:30.244 "data_size": 63488 00:10:30.244 } 00:10:30.244 ] 00:10:30.244 } 00:10:30.244 } 00:10:30.244 }' 00:10:30.244 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:30.244 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:30.244 BaseBdev2 00:10:30.244 BaseBdev3 00:10:30.244 BaseBdev4' 00:10:30.244 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.504 08:48:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.504 [2024-09-28 08:48:08.393440] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.504 [2024-09-28 08:48:08.393509] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.504 [2024-09-28 08:48:08.393593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.504 [2024-09-28 08:48:08.393679] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.504 [2024-09-28 08:48:08.393690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70066 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 70066 ']' 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 70066 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:30.504 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:30.505 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70066 00:10:30.505 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:30.505 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:30.505 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70066' 00:10:30.505 killing process with pid 70066 00:10:30.505 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 70066 00:10:30.505 [2024-09-28 08:48:08.442102] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.505 08:48:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 70066 00:10:31.074 [2024-09-28 08:48:08.864486] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.454 08:48:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:32.454 00:10:32.454 real 0m11.101s 00:10:32.454 user 0m17.120s 00:10:32.454 sys 0m1.999s 00:10:32.454 08:48:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.454 08:48:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.454 ************************************ 00:10:32.454 END TEST raid_state_function_test_sb 00:10:32.454 ************************************ 00:10:32.454 08:48:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:32.454 08:48:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:32.454 08:48:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.454 08:48:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.454 ************************************ 00:10:32.454 START TEST raid_superblock_test 00:10:32.454 ************************************ 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:32.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70731 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70731 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 70731 ']' 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:32.454 08:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.454 [2024-09-28 08:48:10.338313] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:32.454 [2024-09-28 08:48:10.338568] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70731 ] 00:10:32.714 [2024-09-28 08:48:10.506856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.974 [2024-09-28 08:48:10.750438] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.234 [2024-09-28 08:48:10.984742] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.234 [2024-09-28 08:48:10.984843] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:33.234 
08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.234 malloc1 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.234 [2024-09-28 08:48:11.200128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:33.234 [2024-09-28 08:48:11.200239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.234 [2024-09-28 08:48:11.200284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:33.234 [2024-09-28 08:48:11.200325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.234 [2024-09-28 08:48:11.202717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.234 [2024-09-28 08:48:11.202782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:33.234 pt1 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.234 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.494 malloc2 00:10:33.494 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.494 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:33.494 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.494 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.494 [2024-09-28 08:48:11.269416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:33.494 [2024-09-28 08:48:11.269524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.494 [2024-09-28 08:48:11.269565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:33.495 [2024-09-28 08:48:11.269611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.495 [2024-09-28 08:48:11.271995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.495 [2024-09-28 08:48:11.272061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:33.495 
pt2 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.495 malloc3 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.495 [2024-09-28 08:48:11.333942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:33.495 [2024-09-28 08:48:11.334027] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.495 [2024-09-28 08:48:11.334067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:33.495 [2024-09-28 08:48:11.334078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.495 [2024-09-28 08:48:11.336403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.495 [2024-09-28 08:48:11.336438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:33.495 pt3 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.495 malloc4 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.495 [2024-09-28 08:48:11.393906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:33.495 [2024-09-28 08:48:11.394009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.495 [2024-09-28 08:48:11.394045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:33.495 [2024-09-28 08:48:11.394073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.495 [2024-09-28 08:48:11.396362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.495 [2024-09-28 08:48:11.396430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:33.495 pt4 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.495 [2024-09-28 08:48:11.405954] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:33.495 [2024-09-28 
08:48:11.408035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:33.495 [2024-09-28 08:48:11.408150] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:33.495 [2024-09-28 08:48:11.408233] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:33.495 [2024-09-28 08:48:11.408457] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:33.495 [2024-09-28 08:48:11.408508] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:33.495 [2024-09-28 08:48:11.408790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:33.495 [2024-09-28 08:48:11.408979] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:33.495 [2024-09-28 08:48:11.409027] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:33.495 [2024-09-28 08:48:11.409215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.495 "name": "raid_bdev1", 00:10:33.495 "uuid": "1475e82a-d850-413b-833f-e83277c911fb", 00:10:33.495 "strip_size_kb": 64, 00:10:33.495 "state": "online", 00:10:33.495 "raid_level": "raid0", 00:10:33.495 "superblock": true, 00:10:33.495 "num_base_bdevs": 4, 00:10:33.495 "num_base_bdevs_discovered": 4, 00:10:33.495 "num_base_bdevs_operational": 4, 00:10:33.495 "base_bdevs_list": [ 00:10:33.495 { 00:10:33.495 "name": "pt1", 00:10:33.495 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:33.495 "is_configured": true, 00:10:33.495 "data_offset": 2048, 00:10:33.495 "data_size": 63488 00:10:33.495 }, 00:10:33.495 { 00:10:33.495 "name": "pt2", 00:10:33.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.495 "is_configured": true, 00:10:33.495 "data_offset": 2048, 00:10:33.495 "data_size": 63488 00:10:33.495 }, 00:10:33.495 { 00:10:33.495 "name": "pt3", 00:10:33.495 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:33.495 "is_configured": true, 00:10:33.495 "data_offset": 2048, 00:10:33.495 
"data_size": 63488 00:10:33.495 }, 00:10:33.495 { 00:10:33.495 "name": "pt4", 00:10:33.495 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:33.495 "is_configured": true, 00:10:33.495 "data_offset": 2048, 00:10:33.495 "data_size": 63488 00:10:33.495 } 00:10:33.495 ] 00:10:33.495 }' 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.495 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.065 [2024-09-28 08:48:11.781577] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.065 "name": "raid_bdev1", 00:10:34.065 "aliases": [ 00:10:34.065 "1475e82a-d850-413b-833f-e83277c911fb" 
00:10:34.065 ], 00:10:34.065 "product_name": "Raid Volume", 00:10:34.065 "block_size": 512, 00:10:34.065 "num_blocks": 253952, 00:10:34.065 "uuid": "1475e82a-d850-413b-833f-e83277c911fb", 00:10:34.065 "assigned_rate_limits": { 00:10:34.065 "rw_ios_per_sec": 0, 00:10:34.065 "rw_mbytes_per_sec": 0, 00:10:34.065 "r_mbytes_per_sec": 0, 00:10:34.065 "w_mbytes_per_sec": 0 00:10:34.065 }, 00:10:34.065 "claimed": false, 00:10:34.065 "zoned": false, 00:10:34.065 "supported_io_types": { 00:10:34.065 "read": true, 00:10:34.065 "write": true, 00:10:34.065 "unmap": true, 00:10:34.065 "flush": true, 00:10:34.065 "reset": true, 00:10:34.065 "nvme_admin": false, 00:10:34.065 "nvme_io": false, 00:10:34.065 "nvme_io_md": false, 00:10:34.065 "write_zeroes": true, 00:10:34.065 "zcopy": false, 00:10:34.065 "get_zone_info": false, 00:10:34.065 "zone_management": false, 00:10:34.065 "zone_append": false, 00:10:34.065 "compare": false, 00:10:34.065 "compare_and_write": false, 00:10:34.065 "abort": false, 00:10:34.065 "seek_hole": false, 00:10:34.065 "seek_data": false, 00:10:34.065 "copy": false, 00:10:34.065 "nvme_iov_md": false 00:10:34.065 }, 00:10:34.065 "memory_domains": [ 00:10:34.065 { 00:10:34.065 "dma_device_id": "system", 00:10:34.065 "dma_device_type": 1 00:10:34.065 }, 00:10:34.065 { 00:10:34.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.065 "dma_device_type": 2 00:10:34.065 }, 00:10:34.065 { 00:10:34.065 "dma_device_id": "system", 00:10:34.065 "dma_device_type": 1 00:10:34.065 }, 00:10:34.065 { 00:10:34.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.065 "dma_device_type": 2 00:10:34.065 }, 00:10:34.065 { 00:10:34.065 "dma_device_id": "system", 00:10:34.065 "dma_device_type": 1 00:10:34.065 }, 00:10:34.065 { 00:10:34.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.065 "dma_device_type": 2 00:10:34.065 }, 00:10:34.065 { 00:10:34.065 "dma_device_id": "system", 00:10:34.065 "dma_device_type": 1 00:10:34.065 }, 00:10:34.065 { 00:10:34.065 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:34.065 "dma_device_type": 2 00:10:34.065 } 00:10:34.065 ], 00:10:34.065 "driver_specific": { 00:10:34.065 "raid": { 00:10:34.065 "uuid": "1475e82a-d850-413b-833f-e83277c911fb", 00:10:34.065 "strip_size_kb": 64, 00:10:34.065 "state": "online", 00:10:34.065 "raid_level": "raid0", 00:10:34.065 "superblock": true, 00:10:34.065 "num_base_bdevs": 4, 00:10:34.065 "num_base_bdevs_discovered": 4, 00:10:34.065 "num_base_bdevs_operational": 4, 00:10:34.065 "base_bdevs_list": [ 00:10:34.065 { 00:10:34.065 "name": "pt1", 00:10:34.065 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.065 "is_configured": true, 00:10:34.065 "data_offset": 2048, 00:10:34.065 "data_size": 63488 00:10:34.065 }, 00:10:34.065 { 00:10:34.065 "name": "pt2", 00:10:34.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.065 "is_configured": true, 00:10:34.065 "data_offset": 2048, 00:10:34.065 "data_size": 63488 00:10:34.065 }, 00:10:34.065 { 00:10:34.065 "name": "pt3", 00:10:34.065 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:34.065 "is_configured": true, 00:10:34.065 "data_offset": 2048, 00:10:34.065 "data_size": 63488 00:10:34.065 }, 00:10:34.065 { 00:10:34.065 "name": "pt4", 00:10:34.065 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:34.065 "is_configured": true, 00:10:34.065 "data_offset": 2048, 00:10:34.065 "data_size": 63488 00:10:34.065 } 00:10:34.065 ] 00:10:34.065 } 00:10:34.065 } 00:10:34.065 }' 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:34.065 pt2 00:10:34.065 pt3 00:10:34.065 pt4' 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.065 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.066 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.066 08:48:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.066 08:48:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:34.066 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.066 08:48:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.066 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.066 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.066 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.066 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.066 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.066 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:34.066 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.066 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.066 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.066 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.066 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.066 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:34.066 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.066 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:10:34.066 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.066 [2024-09-28 08:48:12.057056] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1475e82a-d850-413b-833f-e83277c911fb 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1475e82a-d850-413b-833f-e83277c911fb ']' 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.326 [2024-09-28 08:48:12.104725] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:34.326 [2024-09-28 08:48:12.104755] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.326 [2024-09-28 08:48:12.104844] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.326 [2024-09-28 08:48:12.104932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.326 [2024-09-28 08:48:12.104947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.326 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.327 [2024-09-28 08:48:12.264430] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:34.327 [2024-09-28 08:48:12.266598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:34.327 [2024-09-28 08:48:12.266704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:34.327 [2024-09-28 08:48:12.266770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:34.327 [2024-09-28 08:48:12.266854] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:34.327 [2024-09-28 08:48:12.266944] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:34.327 [2024-09-28 08:48:12.267001] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:34.327 [2024-09-28 08:48:12.267056] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:34.327 [2024-09-28 08:48:12.267110] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:34.327 [2024-09-28 08:48:12.267140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:10:34.327 request: 00:10:34.327 { 00:10:34.327 "name": "raid_bdev1", 00:10:34.327 "raid_level": "raid0", 00:10:34.327 "base_bdevs": [ 00:10:34.327 "malloc1", 00:10:34.327 "malloc2", 00:10:34.327 "malloc3", 00:10:34.327 "malloc4" 00:10:34.327 ], 00:10:34.327 "strip_size_kb": 64, 00:10:34.327 "superblock": false, 00:10:34.327 "method": "bdev_raid_create", 00:10:34.327 "req_id": 1 00:10:34.327 } 00:10:34.327 Got JSON-RPC error response 00:10:34.327 response: 00:10:34.327 { 00:10:34.327 "code": -17, 00:10:34.327 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:34.327 } 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.327 [2024-09-28 08:48:12.308334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:34.327 [2024-09-28 08:48:12.308381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.327 [2024-09-28 08:48:12.308397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:34.327 [2024-09-28 08:48:12.308409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.327 [2024-09-28 08:48:12.310864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.327 [2024-09-28 08:48:12.310900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:34.327 [2024-09-28 08:48:12.310970] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:34.327 [2024-09-28 08:48:12.311033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:34.327 pt1 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.327 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.587 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.587 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.587 "name": "raid_bdev1", 00:10:34.587 "uuid": "1475e82a-d850-413b-833f-e83277c911fb", 00:10:34.587 "strip_size_kb": 64, 00:10:34.587 "state": "configuring", 00:10:34.587 "raid_level": "raid0", 00:10:34.587 "superblock": true, 00:10:34.587 "num_base_bdevs": 4, 00:10:34.587 "num_base_bdevs_discovered": 1, 00:10:34.587 "num_base_bdevs_operational": 4, 00:10:34.587 "base_bdevs_list": [ 00:10:34.587 { 00:10:34.587 "name": "pt1", 00:10:34.587 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.587 "is_configured": true, 00:10:34.587 "data_offset": 2048, 00:10:34.587 "data_size": 63488 00:10:34.587 }, 00:10:34.587 { 00:10:34.587 "name": null, 00:10:34.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.587 "is_configured": false, 00:10:34.587 "data_offset": 2048, 00:10:34.587 "data_size": 63488 00:10:34.587 }, 00:10:34.587 { 00:10:34.587 "name": null, 00:10:34.587 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:34.587 "is_configured": false, 00:10:34.587 "data_offset": 2048, 00:10:34.587 "data_size": 63488 00:10:34.587 }, 00:10:34.587 { 00:10:34.587 "name": null, 00:10:34.587 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:34.587 "is_configured": false, 00:10:34.587 "data_offset": 2048, 00:10:34.587 "data_size": 63488 00:10:34.587 } 00:10:34.587 ] 00:10:34.587 }' 00:10:34.587 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.587 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.847 [2024-09-28 08:48:12.687706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:34.847 [2024-09-28 08:48:12.687805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.847 [2024-09-28 08:48:12.687839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:34.847 [2024-09-28 08:48:12.687870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.847 [2024-09-28 08:48:12.688343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.847 [2024-09-28 08:48:12.688400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:34.847 [2024-09-28 08:48:12.688510] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:34.847 [2024-09-28 08:48:12.688563] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:34.847 pt2 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.847 [2024-09-28 08:48:12.695704] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.847 08:48:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.847 "name": "raid_bdev1", 00:10:34.847 "uuid": "1475e82a-d850-413b-833f-e83277c911fb", 00:10:34.847 "strip_size_kb": 64, 00:10:34.847 "state": "configuring", 00:10:34.847 "raid_level": "raid0", 00:10:34.847 "superblock": true, 00:10:34.847 "num_base_bdevs": 4, 00:10:34.847 "num_base_bdevs_discovered": 1, 00:10:34.847 "num_base_bdevs_operational": 4, 00:10:34.847 "base_bdevs_list": [ 00:10:34.847 { 00:10:34.847 "name": "pt1", 00:10:34.847 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.847 "is_configured": true, 00:10:34.847 "data_offset": 2048, 00:10:34.847 "data_size": 63488 00:10:34.847 }, 00:10:34.847 { 00:10:34.847 "name": null, 00:10:34.847 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.847 "is_configured": false, 00:10:34.847 "data_offset": 0, 00:10:34.847 "data_size": 63488 00:10:34.847 }, 00:10:34.847 { 00:10:34.847 "name": null, 00:10:34.847 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:34.847 "is_configured": false, 00:10:34.847 "data_offset": 2048, 00:10:34.847 "data_size": 63488 00:10:34.847 }, 00:10:34.847 { 00:10:34.847 "name": null, 00:10:34.847 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:34.847 "is_configured": false, 00:10:34.847 "data_offset": 2048, 00:10:34.847 "data_size": 63488 00:10:34.847 } 00:10:34.847 ] 00:10:34.847 }' 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.847 08:48:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.126 [2024-09-28 08:48:13.075068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:35.126 [2024-09-28 08:48:13.075177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.126 [2024-09-28 08:48:13.075202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:35.126 [2024-09-28 08:48:13.075212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.126 [2024-09-28 08:48:13.075723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.126 [2024-09-28 08:48:13.075741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:35.126 [2024-09-28 08:48:13.075830] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:35.126 [2024-09-28 08:48:13.075861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:35.126 pt2 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.126 [2024-09-28 08:48:13.087029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:35.126 [2024-09-28 08:48:13.087075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.126 [2024-09-28 08:48:13.087122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:35.126 [2024-09-28 08:48:13.087132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.126 [2024-09-28 08:48:13.087506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.126 [2024-09-28 08:48:13.087525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:35.126 [2024-09-28 08:48:13.087585] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:35.126 [2024-09-28 08:48:13.087607] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:35.126 pt3 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.126 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.126 [2024-09-28 08:48:13.098986] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:35.126 [2024-09-28 08:48:13.099036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.126 [2024-09-28 08:48:13.099055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:35.126 [2024-09-28 08:48:13.099064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.126 [2024-09-28 08:48:13.099490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.126 [2024-09-28 08:48:13.099513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:35.126 [2024-09-28 08:48:13.099579] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:35.126 [2024-09-28 08:48:13.099606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:35.126 [2024-09-28 08:48:13.099776] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:35.126 [2024-09-28 08:48:13.099787] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:35.126 [2024-09-28 08:48:13.100072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:35.126 [2024-09-28 08:48:13.100255] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:35.126 [2024-09-28 08:48:13.100270] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:35.127 [2024-09-28 08:48:13.100415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.127 pt4 00:10:35.127 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.127 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:35.127 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:35.127 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:35.127 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.127 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.127 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.127 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.403 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.403 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.403 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.403 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.403 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.403 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.403 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.403 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.403 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.403 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.403 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.403 "name": "raid_bdev1", 00:10:35.403 "uuid": "1475e82a-d850-413b-833f-e83277c911fb", 00:10:35.403 "strip_size_kb": 64, 00:10:35.403 "state": "online", 00:10:35.403 "raid_level": "raid0", 00:10:35.403 
"superblock": true, 00:10:35.403 "num_base_bdevs": 4, 00:10:35.403 "num_base_bdevs_discovered": 4, 00:10:35.403 "num_base_bdevs_operational": 4, 00:10:35.403 "base_bdevs_list": [ 00:10:35.403 { 00:10:35.403 "name": "pt1", 00:10:35.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:35.403 "is_configured": true, 00:10:35.403 "data_offset": 2048, 00:10:35.403 "data_size": 63488 00:10:35.403 }, 00:10:35.403 { 00:10:35.403 "name": "pt2", 00:10:35.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:35.403 "is_configured": true, 00:10:35.403 "data_offset": 2048, 00:10:35.403 "data_size": 63488 00:10:35.403 }, 00:10:35.403 { 00:10:35.403 "name": "pt3", 00:10:35.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:35.403 "is_configured": true, 00:10:35.403 "data_offset": 2048, 00:10:35.403 "data_size": 63488 00:10:35.403 }, 00:10:35.403 { 00:10:35.403 "name": "pt4", 00:10:35.403 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:35.403 "is_configured": true, 00:10:35.403 "data_offset": 2048, 00:10:35.403 "data_size": 63488 00:10:35.403 } 00:10:35.403 ] 00:10:35.403 }' 00:10:35.403 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.403 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.663 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:35.663 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:35.663 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:35.663 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:35.663 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:35.663 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:35.663 08:48:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:35.663 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.663 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.663 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:35.663 [2024-09-28 08:48:13.478736] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.663 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.663 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:35.663 "name": "raid_bdev1", 00:10:35.663 "aliases": [ 00:10:35.663 "1475e82a-d850-413b-833f-e83277c911fb" 00:10:35.663 ], 00:10:35.663 "product_name": "Raid Volume", 00:10:35.663 "block_size": 512, 00:10:35.663 "num_blocks": 253952, 00:10:35.663 "uuid": "1475e82a-d850-413b-833f-e83277c911fb", 00:10:35.663 "assigned_rate_limits": { 00:10:35.663 "rw_ios_per_sec": 0, 00:10:35.663 "rw_mbytes_per_sec": 0, 00:10:35.663 "r_mbytes_per_sec": 0, 00:10:35.663 "w_mbytes_per_sec": 0 00:10:35.663 }, 00:10:35.663 "claimed": false, 00:10:35.663 "zoned": false, 00:10:35.663 "supported_io_types": { 00:10:35.663 "read": true, 00:10:35.663 "write": true, 00:10:35.663 "unmap": true, 00:10:35.663 "flush": true, 00:10:35.663 "reset": true, 00:10:35.663 "nvme_admin": false, 00:10:35.663 "nvme_io": false, 00:10:35.663 "nvme_io_md": false, 00:10:35.664 "write_zeroes": true, 00:10:35.664 "zcopy": false, 00:10:35.664 "get_zone_info": false, 00:10:35.664 "zone_management": false, 00:10:35.664 "zone_append": false, 00:10:35.664 "compare": false, 00:10:35.664 "compare_and_write": false, 00:10:35.664 "abort": false, 00:10:35.664 "seek_hole": false, 00:10:35.664 "seek_data": false, 00:10:35.664 "copy": false, 00:10:35.664 "nvme_iov_md": false 00:10:35.664 }, 00:10:35.664 
"memory_domains": [ 00:10:35.664 { 00:10:35.664 "dma_device_id": "system", 00:10:35.664 "dma_device_type": 1 00:10:35.664 }, 00:10:35.664 { 00:10:35.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.664 "dma_device_type": 2 00:10:35.664 }, 00:10:35.664 { 00:10:35.664 "dma_device_id": "system", 00:10:35.664 "dma_device_type": 1 00:10:35.664 }, 00:10:35.664 { 00:10:35.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.664 "dma_device_type": 2 00:10:35.664 }, 00:10:35.664 { 00:10:35.664 "dma_device_id": "system", 00:10:35.664 "dma_device_type": 1 00:10:35.664 }, 00:10:35.664 { 00:10:35.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.664 "dma_device_type": 2 00:10:35.664 }, 00:10:35.664 { 00:10:35.664 "dma_device_id": "system", 00:10:35.664 "dma_device_type": 1 00:10:35.664 }, 00:10:35.664 { 00:10:35.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.664 "dma_device_type": 2 00:10:35.664 } 00:10:35.664 ], 00:10:35.664 "driver_specific": { 00:10:35.664 "raid": { 00:10:35.664 "uuid": "1475e82a-d850-413b-833f-e83277c911fb", 00:10:35.664 "strip_size_kb": 64, 00:10:35.664 "state": "online", 00:10:35.664 "raid_level": "raid0", 00:10:35.664 "superblock": true, 00:10:35.664 "num_base_bdevs": 4, 00:10:35.664 "num_base_bdevs_discovered": 4, 00:10:35.664 "num_base_bdevs_operational": 4, 00:10:35.664 "base_bdevs_list": [ 00:10:35.664 { 00:10:35.664 "name": "pt1", 00:10:35.664 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:35.664 "is_configured": true, 00:10:35.664 "data_offset": 2048, 00:10:35.664 "data_size": 63488 00:10:35.664 }, 00:10:35.664 { 00:10:35.664 "name": "pt2", 00:10:35.664 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:35.664 "is_configured": true, 00:10:35.664 "data_offset": 2048, 00:10:35.664 "data_size": 63488 00:10:35.664 }, 00:10:35.664 { 00:10:35.664 "name": "pt3", 00:10:35.664 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:35.664 "is_configured": true, 00:10:35.664 "data_offset": 2048, 00:10:35.664 "data_size": 63488 
00:10:35.664 }, 00:10:35.664 { 00:10:35.664 "name": "pt4", 00:10:35.664 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:35.664 "is_configured": true, 00:10:35.664 "data_offset": 2048, 00:10:35.664 "data_size": 63488 00:10:35.664 } 00:10:35.664 ] 00:10:35.664 } 00:10:35.664 } 00:10:35.664 }' 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:35.664 pt2 00:10:35.664 pt3 00:10:35.664 pt4' 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.664 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.924 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.924 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.924 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.924 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.924 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:35.925 
08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:35.925 [2024-09-28 08:48:13.766118] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1475e82a-d850-413b-833f-e83277c911fb '!=' 1475e82a-d850-413b-833f-e83277c911fb ']' 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70731 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 70731 ']' 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 70731 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70731 00:10:35.925 killing process with pid 70731 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70731' 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 70731 00:10:35.925 [2024-09-28 08:48:13.827452] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:35.925 [2024-09-28 08:48:13.827541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.925 08:48:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 70731 00:10:35.925 [2024-09-28 08:48:13.827619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.925 [2024-09-28 08:48:13.827629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:36.495 [2024-09-28 08:48:14.252265] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.873 08:48:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:37.873 00:10:37.873 real 0m5.340s 00:10:37.873 user 0m7.211s 00:10:37.873 sys 0m0.980s 00:10:37.873 08:48:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.873 ************************************ 00:10:37.873 END TEST raid_superblock_test 00:10:37.873 ************************************ 00:10:37.873 08:48:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.873 08:48:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:37.873 08:48:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:37.873 08:48:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.873 08:48:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.873 ************************************ 00:10:37.873 START TEST raid_read_error_test 00:10:37.873 ************************************ 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6GUXAiscxR 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70990 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70990 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 70990 ']' 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.873 08:48:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.873 [2024-09-28 08:48:15.751235] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:37.873 [2024-09-28 08:48:15.751481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70990 ] 00:10:38.132 [2024-09-28 08:48:15.921119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.391 [2024-09-28 08:48:16.159655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.650 [2024-09-28 08:48:16.392551] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.650 [2024-09-28 08:48:16.392588] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.650 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.650 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:38.650 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:38.650 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:38.650 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.650 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.650 BaseBdev1_malloc 00:10:38.650 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.650 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:38.650 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.650 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.650 true 00:10:38.650 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:38.650 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:38.650 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.650 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.910 [2024-09-28 08:48:16.648741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:38.910 [2024-09-28 08:48:16.648808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.910 [2024-09-28 08:48:16.648842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:38.910 [2024-09-28 08:48:16.648853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.910 [2024-09-28 08:48:16.651229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.910 [2024-09-28 08:48:16.651267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:38.910 BaseBdev1 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.910 BaseBdev2_malloc 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.910 true 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.910 [2024-09-28 08:48:16.749888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:38.910 [2024-09-28 08:48:16.749944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.910 [2024-09-28 08:48:16.749977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:38.910 [2024-09-28 08:48:16.749988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.910 [2024-09-28 08:48:16.752295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.910 [2024-09-28 08:48:16.752334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:38.910 BaseBdev2 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.910 BaseBdev3_malloc 00:10:38.910 08:48:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.910 true 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.910 [2024-09-28 08:48:16.821300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:38.910 [2024-09-28 08:48:16.821349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.910 [2024-09-28 08:48:16.821366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:38.910 [2024-09-28 08:48:16.821377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.910 [2024-09-28 08:48:16.823771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.910 [2024-09-28 08:48:16.823806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:38.910 BaseBdev3 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.910 BaseBdev4_malloc 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.910 true 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.910 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.910 [2024-09-28 08:48:16.892436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:38.910 [2024-09-28 08:48:16.892499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.911 [2024-09-28 08:48:16.892532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:38.911 [2024-09-28 08:48:16.892544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.911 [2024-09-28 08:48:16.894864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.911 [2024-09-28 08:48:16.894942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:38.911 BaseBdev4 00:10:38.911 08:48:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.911 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:38.911 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.911 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.170 [2024-09-28 08:48:16.904498] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.170 [2024-09-28 08:48:16.906600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.170 [2024-09-28 08:48:16.906742] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.170 [2024-09-28 08:48:16.906807] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:39.170 [2024-09-28 08:48:16.907021] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:39.170 [2024-09-28 08:48:16.907036] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:39.170 [2024-09-28 08:48:16.907309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:39.170 [2024-09-28 08:48:16.907474] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:39.170 [2024-09-28 08:48:16.907484] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:39.170 [2024-09-28 08:48:16.907677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.170 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.170 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:39.170 08:48:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.170 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.170 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.170 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.171 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.171 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.171 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.171 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.171 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.171 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.171 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.171 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.171 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.171 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.171 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.171 "name": "raid_bdev1", 00:10:39.171 "uuid": "8bc8b122-b3dc-422e-9f82-7e0ca26802f0", 00:10:39.171 "strip_size_kb": 64, 00:10:39.171 "state": "online", 00:10:39.171 "raid_level": "raid0", 00:10:39.171 "superblock": true, 00:10:39.171 "num_base_bdevs": 4, 00:10:39.171 "num_base_bdevs_discovered": 4, 00:10:39.171 "num_base_bdevs_operational": 4, 00:10:39.171 "base_bdevs_list": [ 00:10:39.171 
{ 00:10:39.171 "name": "BaseBdev1", 00:10:39.171 "uuid": "85956376-a13f-5edc-ac21-ad48dd17f3cc", 00:10:39.171 "is_configured": true, 00:10:39.171 "data_offset": 2048, 00:10:39.171 "data_size": 63488 00:10:39.171 }, 00:10:39.171 { 00:10:39.171 "name": "BaseBdev2", 00:10:39.171 "uuid": "8b7fb53c-41e7-5e91-a969-a427d47b2564", 00:10:39.171 "is_configured": true, 00:10:39.171 "data_offset": 2048, 00:10:39.171 "data_size": 63488 00:10:39.171 }, 00:10:39.171 { 00:10:39.171 "name": "BaseBdev3", 00:10:39.171 "uuid": "f3f42c7f-e6af-5fd0-b513-78749e675997", 00:10:39.171 "is_configured": true, 00:10:39.171 "data_offset": 2048, 00:10:39.171 "data_size": 63488 00:10:39.171 }, 00:10:39.171 { 00:10:39.171 "name": "BaseBdev4", 00:10:39.171 "uuid": "c78a5b4d-c6ba-5916-997a-626d77e6099c", 00:10:39.171 "is_configured": true, 00:10:39.171 "data_offset": 2048, 00:10:39.171 "data_size": 63488 00:10:39.171 } 00:10:39.171 ] 00:10:39.171 }' 00:10:39.171 08:48:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.171 08:48:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.430 08:48:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:39.430 08:48:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:39.689 [2024-09-28 08:48:17.437029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.627 08:48:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.627 08:48:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.627 "name": "raid_bdev1", 00:10:40.627 "uuid": "8bc8b122-b3dc-422e-9f82-7e0ca26802f0", 00:10:40.627 "strip_size_kb": 64, 00:10:40.627 "state": "online", 00:10:40.627 "raid_level": "raid0", 00:10:40.627 "superblock": true, 00:10:40.627 "num_base_bdevs": 4, 00:10:40.627 "num_base_bdevs_discovered": 4, 00:10:40.627 "num_base_bdevs_operational": 4, 00:10:40.627 "base_bdevs_list": [ 00:10:40.627 { 00:10:40.627 "name": "BaseBdev1", 00:10:40.627 "uuid": "85956376-a13f-5edc-ac21-ad48dd17f3cc", 00:10:40.627 "is_configured": true, 00:10:40.627 "data_offset": 2048, 00:10:40.627 "data_size": 63488 00:10:40.627 }, 00:10:40.627 { 00:10:40.627 "name": "BaseBdev2", 00:10:40.627 "uuid": "8b7fb53c-41e7-5e91-a969-a427d47b2564", 00:10:40.627 "is_configured": true, 00:10:40.627 "data_offset": 2048, 00:10:40.627 "data_size": 63488 00:10:40.627 }, 00:10:40.627 { 00:10:40.627 "name": "BaseBdev3", 00:10:40.627 "uuid": "f3f42c7f-e6af-5fd0-b513-78749e675997", 00:10:40.627 "is_configured": true, 00:10:40.627 "data_offset": 2048, 00:10:40.627 "data_size": 63488 00:10:40.627 }, 00:10:40.627 { 00:10:40.627 "name": "BaseBdev4", 00:10:40.627 "uuid": "c78a5b4d-c6ba-5916-997a-626d77e6099c", 00:10:40.627 "is_configured": true, 00:10:40.627 "data_offset": 2048, 00:10:40.627 "data_size": 63488 00:10:40.627 } 00:10:40.627 ] 00:10:40.627 }' 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.627 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.886 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:40.886 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.886 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.886 [2024-09-28 08:48:18.825882] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:40.886 [2024-09-28 08:48:18.825921] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.886 [2024-09-28 08:48:18.828679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.886 [2024-09-28 08:48:18.828738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.886 [2024-09-28 08:48:18.828784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.886 [2024-09-28 08:48:18.828796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:40.886 { 00:10:40.886 "results": [ 00:10:40.886 { 00:10:40.886 "job": "raid_bdev1", 00:10:40.886 "core_mask": "0x1", 00:10:40.886 "workload": "randrw", 00:10:40.886 "percentage": 50, 00:10:40.886 "status": "finished", 00:10:40.886 "queue_depth": 1, 00:10:40.886 "io_size": 131072, 00:10:40.886 "runtime": 1.389399, 00:10:40.886 "iops": 14201.82395409814, 00:10:40.886 "mibps": 1775.2279942622674, 00:10:40.886 "io_failed": 1, 00:10:40.886 "io_timeout": 0, 00:10:40.886 "avg_latency_us": 99.35863338892999, 00:10:40.886 "min_latency_us": 24.705676855895195, 00:10:40.886 "max_latency_us": 1452.380786026201 00:10:40.886 } 00:10:40.886 ], 00:10:40.886 "core_count": 1 00:10:40.886 } 00:10:40.886 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.886 08:48:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70990 00:10:40.886 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 70990 ']' 00:10:40.886 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 70990 00:10:40.886 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:40.886 08:48:18 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.886 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70990 00:10:40.886 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:40.886 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:40.887 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70990' 00:10:40.887 killing process with pid 70990 00:10:40.887 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 70990 00:10:40.887 [2024-09-28 08:48:18.874800] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:40.887 08:48:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 70990 00:10:41.455 [2024-09-28 08:48:19.213908] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.833 08:48:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6GUXAiscxR 00:10:42.833 08:48:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:42.833 08:48:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:42.833 08:48:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:42.833 08:48:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:42.833 08:48:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:42.833 08:48:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:42.833 08:48:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:42.833 00:10:42.833 real 0m4.967s 00:10:42.833 user 0m5.694s 00:10:42.833 sys 0m0.693s 00:10:42.833 08:48:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:42.833 08:48:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.833 ************************************ 00:10:42.833 END TEST raid_read_error_test 00:10:42.833 ************************************ 00:10:42.833 08:48:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:42.833 08:48:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:42.833 08:48:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.833 08:48:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.833 ************************************ 00:10:42.833 START TEST raid_write_error_test 00:10:42.833 ************************************ 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.53K5GoBH0W 00:10:42.833 08:48:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71136 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71136 00:10:42.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 71136 ']' 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:42.833 08:48:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.834 08:48:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:42.834 08:48:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.834 [2024-09-28 08:48:20.794743] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:42.834 [2024-09-28 08:48:20.794873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71136 ] 00:10:43.093 [2024-09-28 08:48:20.963844] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.354 [2024-09-28 08:48:21.202805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.613 [2024-09-28 08:48:21.432730] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.613 [2024-09-28 08:48:21.432765] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.873 BaseBdev1_malloc 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.873 true 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.873 [2024-09-28 08:48:21.685587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:43.873 [2024-09-28 08:48:21.685646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.873 [2024-09-28 08:48:21.685676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:43.873 [2024-09-28 08:48:21.685688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.873 [2024-09-28 08:48:21.688121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.873 [2024-09-28 08:48:21.688160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:43.873 BaseBdev1 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.873 BaseBdev2_malloc 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:43.873 08:48:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.873 true 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.873 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.874 [2024-09-28 08:48:21.781137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:43.874 [2024-09-28 08:48:21.781190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.874 [2024-09-28 08:48:21.781224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:43.874 [2024-09-28 08:48:21.781235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.874 [2024-09-28 08:48:21.783565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.874 [2024-09-28 08:48:21.783660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:43.874 BaseBdev2 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:43.874 BaseBdev3_malloc 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.874 true 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.874 [2024-09-28 08:48:21.853469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:43.874 [2024-09-28 08:48:21.853518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.874 [2024-09-28 08:48:21.853550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:43.874 [2024-09-28 08:48:21.853561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.874 [2024-09-28 08:48:21.855911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.874 [2024-09-28 08:48:21.855947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:43.874 BaseBdev3 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.874 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.133 BaseBdev4_malloc 00:10:44.133 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.133 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:44.133 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.133 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.133 true 00:10:44.133 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.133 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:44.133 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.133 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.133 [2024-09-28 08:48:21.924490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:44.133 [2024-09-28 08:48:21.924544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.133 [2024-09-28 08:48:21.924577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:44.133 [2024-09-28 08:48:21.924588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.133 [2024-09-28 08:48:21.926914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.133 [2024-09-28 08:48:21.927005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:44.133 BaseBdev4 
00:10:44.133 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.133 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:44.133 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.133 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.134 [2024-09-28 08:48:21.936551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.134 [2024-09-28 08:48:21.938626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.134 [2024-09-28 08:48:21.938718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.134 [2024-09-28 08:48:21.938779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:44.134 [2024-09-28 08:48:21.938991] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:44.134 [2024-09-28 08:48:21.939011] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:44.134 [2024-09-28 08:48:21.939255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:44.134 [2024-09-28 08:48:21.939417] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:44.134 [2024-09-28 08:48:21.939425] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:44.134 [2024-09-28 08:48:21.939575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.134 "name": "raid_bdev1", 00:10:44.134 "uuid": "5c214577-0036-46b0-a245-dfcc6f66b572", 00:10:44.134 "strip_size_kb": 64, 00:10:44.134 "state": "online", 00:10:44.134 "raid_level": "raid0", 00:10:44.134 "superblock": true, 00:10:44.134 "num_base_bdevs": 4, 00:10:44.134 "num_base_bdevs_discovered": 4, 00:10:44.134 
"num_base_bdevs_operational": 4, 00:10:44.134 "base_bdevs_list": [ 00:10:44.134 { 00:10:44.134 "name": "BaseBdev1", 00:10:44.134 "uuid": "d4eb9544-a906-5cef-8bcb-17d0359713c7", 00:10:44.134 "is_configured": true, 00:10:44.134 "data_offset": 2048, 00:10:44.134 "data_size": 63488 00:10:44.134 }, 00:10:44.134 { 00:10:44.134 "name": "BaseBdev2", 00:10:44.134 "uuid": "9f77f1ef-d4fa-5006-80f5-86b532886776", 00:10:44.134 "is_configured": true, 00:10:44.134 "data_offset": 2048, 00:10:44.134 "data_size": 63488 00:10:44.134 }, 00:10:44.134 { 00:10:44.134 "name": "BaseBdev3", 00:10:44.134 "uuid": "378ab305-ffe1-5ba9-89b3-0cab4f3ed0ed", 00:10:44.134 "is_configured": true, 00:10:44.134 "data_offset": 2048, 00:10:44.134 "data_size": 63488 00:10:44.134 }, 00:10:44.134 { 00:10:44.134 "name": "BaseBdev4", 00:10:44.134 "uuid": "be32527c-b476-5dcc-ac51-a6d3ae77daca", 00:10:44.134 "is_configured": true, 00:10:44.134 "data_offset": 2048, 00:10:44.134 "data_size": 63488 00:10:44.134 } 00:10:44.134 ] 00:10:44.134 }' 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.134 08:48:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.393 08:48:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:44.393 08:48:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:44.652 [2024-09-28 08:48:22.433171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.592 "name": "raid_bdev1", 00:10:45.592 "uuid": "5c214577-0036-46b0-a245-dfcc6f66b572", 00:10:45.592 "strip_size_kb": 64, 00:10:45.592 "state": "online", 00:10:45.592 "raid_level": "raid0", 00:10:45.592 "superblock": true, 00:10:45.592 "num_base_bdevs": 4, 00:10:45.592 "num_base_bdevs_discovered": 4, 00:10:45.592 "num_base_bdevs_operational": 4, 00:10:45.592 "base_bdevs_list": [ 00:10:45.592 { 00:10:45.592 "name": "BaseBdev1", 00:10:45.592 "uuid": "d4eb9544-a906-5cef-8bcb-17d0359713c7", 00:10:45.592 "is_configured": true, 00:10:45.592 "data_offset": 2048, 00:10:45.592 "data_size": 63488 00:10:45.592 }, 00:10:45.592 { 00:10:45.592 "name": "BaseBdev2", 00:10:45.592 "uuid": "9f77f1ef-d4fa-5006-80f5-86b532886776", 00:10:45.592 "is_configured": true, 00:10:45.592 "data_offset": 2048, 00:10:45.592 "data_size": 63488 00:10:45.592 }, 00:10:45.592 { 00:10:45.592 "name": "BaseBdev3", 00:10:45.592 "uuid": "378ab305-ffe1-5ba9-89b3-0cab4f3ed0ed", 00:10:45.592 "is_configured": true, 00:10:45.592 "data_offset": 2048, 00:10:45.592 "data_size": 63488 00:10:45.592 }, 00:10:45.592 { 00:10:45.592 "name": "BaseBdev4", 00:10:45.592 "uuid": "be32527c-b476-5dcc-ac51-a6d3ae77daca", 00:10:45.592 "is_configured": true, 00:10:45.592 "data_offset": 2048, 00:10:45.592 "data_size": 63488 00:10:45.592 } 00:10:45.592 ] 00:10:45.592 }' 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.592 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.852 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:45.852 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.852 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:45.852 [2024-09-28 08:48:23.789924] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.852 [2024-09-28 08:48:23.790029] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.852 [2024-09-28 08:48:23.792737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.852 [2024-09-28 08:48:23.792796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.852 [2024-09-28 08:48:23.792845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.852 [2024-09-28 08:48:23.792858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:45.852 { 00:10:45.853 "results": [ 00:10:45.853 { 00:10:45.853 "job": "raid_bdev1", 00:10:45.853 "core_mask": "0x1", 00:10:45.853 "workload": "randrw", 00:10:45.853 "percentage": 50, 00:10:45.853 "status": "finished", 00:10:45.853 "queue_depth": 1, 00:10:45.853 "io_size": 131072, 00:10:45.853 "runtime": 1.357326, 00:10:45.853 "iops": 14219.13379689183, 00:10:45.853 "mibps": 1777.3917246114788, 00:10:45.853 "io_failed": 1, 00:10:45.853 "io_timeout": 0, 00:10:45.853 "avg_latency_us": 99.19452733290514, 00:10:45.853 "min_latency_us": 25.2646288209607, 00:10:45.853 "max_latency_us": 1402.2986899563318 00:10:45.853 } 00:10:45.853 ], 00:10:45.853 "core_count": 1 00:10:45.853 } 00:10:45.853 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.853 08:48:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71136 00:10:45.853 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 71136 ']' 00:10:45.853 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 71136 00:10:45.853 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:10:45.853 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.853 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71136 00:10:45.853 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:45.853 killing process with pid 71136 00:10:45.853 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:45.853 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71136' 00:10:45.853 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 71136 00:10:45.853 [2024-09-28 08:48:23.829501] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:45.853 08:48:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 71136 00:10:46.422 [2024-09-28 08:48:24.173867] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.842 08:48:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.53K5GoBH0W 00:10:47.842 08:48:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:47.842 08:48:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:47.842 08:48:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:47.842 08:48:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:47.842 08:48:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:47.842 08:48:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:47.842 08:48:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:47.842 ************************************ 00:10:47.842 END TEST raid_write_error_test 00:10:47.842 
************************************ 00:10:47.842 00:10:47.842 real 0m4.887s 00:10:47.842 user 0m5.526s 00:10:47.842 sys 0m0.697s 00:10:47.842 08:48:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.842 08:48:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.842 08:48:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:47.842 08:48:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:47.842 08:48:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:47.842 08:48:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:47.842 08:48:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.842 ************************************ 00:10:47.842 START TEST raid_state_function_test 00:10:47.842 ************************************ 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.842 08:48:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:47.842 08:48:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:47.842 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:47.843 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71285 00:10:47.843 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:47.843 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71285' 00:10:47.843 Process raid pid: 71285 00:10:47.843 08:48:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71285 00:10:47.843 08:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71285 ']' 00:10:47.843 08:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.843 08:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:47.843 08:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.843 08:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:47.843 08:48:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.843 [2024-09-28 08:48:25.747277] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:47.843 [2024-09-28 08:48:25.747525] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.102 [2024-09-28 08:48:25.918107] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.362 [2024-09-28 08:48:26.166102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.622 [2024-09-28 08:48:26.408116] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.622 [2024-09-28 08:48:26.408213] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.622 [2024-09-28 08:48:26.586080] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:48.622 [2024-09-28 08:48:26.586138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:48.622 [2024-09-28 08:48:26.586148] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.622 [2024-09-28 08:48:26.586158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.622 [2024-09-28 08:48:26.586164] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:48.622 [2024-09-28 08:48:26.586174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:48.622 [2024-09-28 08:48:26.586180] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:48.622 [2024-09-28 08:48:26.586189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.622 08:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.882 08:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.882 "name": "Existed_Raid", 00:10:48.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.882 "strip_size_kb": 64, 00:10:48.882 "state": "configuring", 00:10:48.882 "raid_level": "concat", 00:10:48.882 "superblock": false, 00:10:48.882 "num_base_bdevs": 4, 00:10:48.882 "num_base_bdevs_discovered": 0, 00:10:48.882 "num_base_bdevs_operational": 4, 00:10:48.882 "base_bdevs_list": [ 00:10:48.882 { 00:10:48.882 "name": "BaseBdev1", 00:10:48.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.882 "is_configured": false, 00:10:48.882 "data_offset": 0, 00:10:48.882 "data_size": 0 00:10:48.882 }, 00:10:48.882 { 00:10:48.882 "name": "BaseBdev2", 00:10:48.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.882 "is_configured": false, 00:10:48.882 "data_offset": 0, 00:10:48.882 "data_size": 0 00:10:48.882 }, 00:10:48.882 { 00:10:48.882 "name": "BaseBdev3", 00:10:48.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.882 "is_configured": false, 00:10:48.882 "data_offset": 0, 00:10:48.882 "data_size": 0 00:10:48.882 }, 00:10:48.882 { 00:10:48.882 "name": "BaseBdev4", 00:10:48.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.882 "is_configured": false, 00:10:48.882 "data_offset": 0, 00:10:48.882 "data_size": 0 00:10:48.882 } 00:10:48.882 ] 00:10:48.882 }' 00:10:48.882 08:48:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.882 08:48:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.142 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:49.142 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.142 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.142 [2024-09-28 08:48:27.053180] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.142 [2024-09-28 08:48:27.053268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:49.142 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.142 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.142 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.142 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.142 [2024-09-28 08:48:27.065179] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.142 [2024-09-28 08:48:27.065257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.142 [2024-09-28 08:48:27.065300] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.142 [2024-09-28 08:48:27.065323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.142 [2024-09-28 08:48:27.065341] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:49.142 [2024-09-28 08:48:27.065362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.142 [2024-09-28 08:48:27.065380] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:49.142 [2024-09-28 08:48:27.065401] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:49.142 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.142 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:49.142 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.142 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.402 [2024-09-28 08:48:27.151587] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.402 BaseBdev1 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.402 [ 00:10:49.402 { 00:10:49.402 "name": "BaseBdev1", 00:10:49.402 "aliases": [ 00:10:49.402 "96216331-9a76-4509-9cf7-28b3e90ef513" 00:10:49.402 ], 00:10:49.402 "product_name": "Malloc disk", 00:10:49.402 "block_size": 512, 00:10:49.402 "num_blocks": 65536, 00:10:49.402 "uuid": "96216331-9a76-4509-9cf7-28b3e90ef513", 00:10:49.402 "assigned_rate_limits": { 00:10:49.402 "rw_ios_per_sec": 0, 00:10:49.402 "rw_mbytes_per_sec": 0, 00:10:49.402 "r_mbytes_per_sec": 0, 00:10:49.402 "w_mbytes_per_sec": 0 00:10:49.402 }, 00:10:49.402 "claimed": true, 00:10:49.402 "claim_type": "exclusive_write", 00:10:49.402 "zoned": false, 00:10:49.402 "supported_io_types": { 00:10:49.402 "read": true, 00:10:49.402 "write": true, 00:10:49.402 "unmap": true, 00:10:49.402 "flush": true, 00:10:49.402 "reset": true, 00:10:49.402 "nvme_admin": false, 00:10:49.402 "nvme_io": false, 00:10:49.402 "nvme_io_md": false, 00:10:49.402 "write_zeroes": true, 00:10:49.402 "zcopy": true, 00:10:49.402 "get_zone_info": false, 00:10:49.402 "zone_management": false, 00:10:49.402 "zone_append": false, 00:10:49.402 "compare": false, 00:10:49.402 "compare_and_write": false, 00:10:49.402 "abort": true, 00:10:49.402 "seek_hole": false, 00:10:49.402 "seek_data": false, 00:10:49.402 "copy": true, 00:10:49.402 "nvme_iov_md": false 00:10:49.402 }, 00:10:49.402 "memory_domains": [ 00:10:49.402 { 00:10:49.402 "dma_device_id": "system", 00:10:49.402 "dma_device_type": 1 00:10:49.402 }, 00:10:49.402 { 00:10:49.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.402 "dma_device_type": 2 00:10:49.402 } 00:10:49.402 ], 00:10:49.402 "driver_specific": {} 00:10:49.402 } 00:10:49.402 ] 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.402 "name": "Existed_Raid", 
00:10:49.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.402 "strip_size_kb": 64, 00:10:49.402 "state": "configuring", 00:10:49.402 "raid_level": "concat", 00:10:49.402 "superblock": false, 00:10:49.402 "num_base_bdevs": 4, 00:10:49.402 "num_base_bdevs_discovered": 1, 00:10:49.402 "num_base_bdevs_operational": 4, 00:10:49.402 "base_bdevs_list": [ 00:10:49.402 { 00:10:49.402 "name": "BaseBdev1", 00:10:49.402 "uuid": "96216331-9a76-4509-9cf7-28b3e90ef513", 00:10:49.402 "is_configured": true, 00:10:49.402 "data_offset": 0, 00:10:49.402 "data_size": 65536 00:10:49.402 }, 00:10:49.402 { 00:10:49.402 "name": "BaseBdev2", 00:10:49.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.402 "is_configured": false, 00:10:49.402 "data_offset": 0, 00:10:49.402 "data_size": 0 00:10:49.402 }, 00:10:49.402 { 00:10:49.402 "name": "BaseBdev3", 00:10:49.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.402 "is_configured": false, 00:10:49.402 "data_offset": 0, 00:10:49.402 "data_size": 0 00:10:49.402 }, 00:10:49.402 { 00:10:49.402 "name": "BaseBdev4", 00:10:49.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.402 "is_configured": false, 00:10:49.402 "data_offset": 0, 00:10:49.402 "data_size": 0 00:10:49.402 } 00:10:49.402 ] 00:10:49.402 }' 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.402 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.662 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.662 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.663 [2024-09-28 08:48:27.602871] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.663 [2024-09-28 08:48:27.602976] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.663 [2024-09-28 08:48:27.610902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.663 [2024-09-28 08:48:27.612948] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.663 [2024-09-28 08:48:27.612989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.663 [2024-09-28 08:48:27.612998] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:49.663 [2024-09-28 08:48:27.613010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.663 [2024-09-28 08:48:27.613017] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:49.663 [2024-09-28 08:48:27.613025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.663 "name": "Existed_Raid", 00:10:49.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.663 "strip_size_kb": 64, 00:10:49.663 "state": "configuring", 00:10:49.663 "raid_level": "concat", 00:10:49.663 "superblock": false, 00:10:49.663 "num_base_bdevs": 4, 00:10:49.663 
"num_base_bdevs_discovered": 1, 00:10:49.663 "num_base_bdevs_operational": 4, 00:10:49.663 "base_bdevs_list": [ 00:10:49.663 { 00:10:49.663 "name": "BaseBdev1", 00:10:49.663 "uuid": "96216331-9a76-4509-9cf7-28b3e90ef513", 00:10:49.663 "is_configured": true, 00:10:49.663 "data_offset": 0, 00:10:49.663 "data_size": 65536 00:10:49.663 }, 00:10:49.663 { 00:10:49.663 "name": "BaseBdev2", 00:10:49.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.663 "is_configured": false, 00:10:49.663 "data_offset": 0, 00:10:49.663 "data_size": 0 00:10:49.663 }, 00:10:49.663 { 00:10:49.663 "name": "BaseBdev3", 00:10:49.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.663 "is_configured": false, 00:10:49.663 "data_offset": 0, 00:10:49.663 "data_size": 0 00:10:49.663 }, 00:10:49.663 { 00:10:49.663 "name": "BaseBdev4", 00:10:49.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.663 "is_configured": false, 00:10:49.663 "data_offset": 0, 00:10:49.663 "data_size": 0 00:10:49.663 } 00:10:49.663 ] 00:10:49.663 }' 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.663 08:48:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.234 [2024-09-28 08:48:28.069406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.234 BaseBdev2 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:50.234 08:48:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.234 [ 00:10:50.234 { 00:10:50.234 "name": "BaseBdev2", 00:10:50.234 "aliases": [ 00:10:50.234 "0a37d573-e64e-484c-9f9e-6bc5218fdc4e" 00:10:50.234 ], 00:10:50.234 "product_name": "Malloc disk", 00:10:50.234 "block_size": 512, 00:10:50.234 "num_blocks": 65536, 00:10:50.234 "uuid": "0a37d573-e64e-484c-9f9e-6bc5218fdc4e", 00:10:50.234 "assigned_rate_limits": { 00:10:50.234 "rw_ios_per_sec": 0, 00:10:50.234 "rw_mbytes_per_sec": 0, 00:10:50.234 "r_mbytes_per_sec": 0, 00:10:50.234 "w_mbytes_per_sec": 0 00:10:50.234 }, 00:10:50.234 "claimed": true, 00:10:50.234 "claim_type": "exclusive_write", 00:10:50.234 "zoned": false, 00:10:50.234 "supported_io_types": { 
00:10:50.234 "read": true, 00:10:50.234 "write": true, 00:10:50.234 "unmap": true, 00:10:50.234 "flush": true, 00:10:50.234 "reset": true, 00:10:50.234 "nvme_admin": false, 00:10:50.234 "nvme_io": false, 00:10:50.234 "nvme_io_md": false, 00:10:50.234 "write_zeroes": true, 00:10:50.234 "zcopy": true, 00:10:50.234 "get_zone_info": false, 00:10:50.234 "zone_management": false, 00:10:50.234 "zone_append": false, 00:10:50.234 "compare": false, 00:10:50.234 "compare_and_write": false, 00:10:50.234 "abort": true, 00:10:50.234 "seek_hole": false, 00:10:50.234 "seek_data": false, 00:10:50.234 "copy": true, 00:10:50.234 "nvme_iov_md": false 00:10:50.234 }, 00:10:50.234 "memory_domains": [ 00:10:50.234 { 00:10:50.234 "dma_device_id": "system", 00:10:50.234 "dma_device_type": 1 00:10:50.234 }, 00:10:50.234 { 00:10:50.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.234 "dma_device_type": 2 00:10:50.234 } 00:10:50.234 ], 00:10:50.234 "driver_specific": {} 00:10:50.234 } 00:10:50.234 ] 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.234 "name": "Existed_Raid", 00:10:50.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.234 "strip_size_kb": 64, 00:10:50.234 "state": "configuring", 00:10:50.234 "raid_level": "concat", 00:10:50.234 "superblock": false, 00:10:50.234 "num_base_bdevs": 4, 00:10:50.234 "num_base_bdevs_discovered": 2, 00:10:50.234 "num_base_bdevs_operational": 4, 00:10:50.234 "base_bdevs_list": [ 00:10:50.234 { 00:10:50.234 "name": "BaseBdev1", 00:10:50.234 "uuid": "96216331-9a76-4509-9cf7-28b3e90ef513", 00:10:50.234 "is_configured": true, 00:10:50.234 "data_offset": 0, 00:10:50.234 "data_size": 65536 00:10:50.234 }, 00:10:50.234 { 00:10:50.234 "name": "BaseBdev2", 00:10:50.234 "uuid": "0a37d573-e64e-484c-9f9e-6bc5218fdc4e", 00:10:50.234 
"is_configured": true, 00:10:50.234 "data_offset": 0, 00:10:50.234 "data_size": 65536 00:10:50.234 }, 00:10:50.234 { 00:10:50.234 "name": "BaseBdev3", 00:10:50.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.234 "is_configured": false, 00:10:50.234 "data_offset": 0, 00:10:50.234 "data_size": 0 00:10:50.234 }, 00:10:50.234 { 00:10:50.234 "name": "BaseBdev4", 00:10:50.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.234 "is_configured": false, 00:10:50.234 "data_offset": 0, 00:10:50.234 "data_size": 0 00:10:50.234 } 00:10:50.234 ] 00:10:50.234 }' 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.234 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.804 [2024-09-28 08:48:28.577079] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:50.804 BaseBdev3 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.804 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.804 [ 00:10:50.804 { 00:10:50.804 "name": "BaseBdev3", 00:10:50.804 "aliases": [ 00:10:50.804 "4f2c8791-fe02-485a-9382-9edffd1a9e0f" 00:10:50.804 ], 00:10:50.804 "product_name": "Malloc disk", 00:10:50.804 "block_size": 512, 00:10:50.804 "num_blocks": 65536, 00:10:50.804 "uuid": "4f2c8791-fe02-485a-9382-9edffd1a9e0f", 00:10:50.804 "assigned_rate_limits": { 00:10:50.804 "rw_ios_per_sec": 0, 00:10:50.804 "rw_mbytes_per_sec": 0, 00:10:50.804 "r_mbytes_per_sec": 0, 00:10:50.804 "w_mbytes_per_sec": 0 00:10:50.804 }, 00:10:50.804 "claimed": true, 00:10:50.804 "claim_type": "exclusive_write", 00:10:50.804 "zoned": false, 00:10:50.804 "supported_io_types": { 00:10:50.804 "read": true, 00:10:50.804 "write": true, 00:10:50.805 "unmap": true, 00:10:50.805 "flush": true, 00:10:50.805 "reset": true, 00:10:50.805 "nvme_admin": false, 00:10:50.805 "nvme_io": false, 00:10:50.805 "nvme_io_md": false, 00:10:50.805 "write_zeroes": true, 00:10:50.805 "zcopy": true, 00:10:50.805 "get_zone_info": false, 00:10:50.805 "zone_management": false, 00:10:50.805 "zone_append": false, 00:10:50.805 "compare": false, 00:10:50.805 "compare_and_write": false, 
00:10:50.805 "abort": true, 00:10:50.805 "seek_hole": false, 00:10:50.805 "seek_data": false, 00:10:50.805 "copy": true, 00:10:50.805 "nvme_iov_md": false 00:10:50.805 }, 00:10:50.805 "memory_domains": [ 00:10:50.805 { 00:10:50.805 "dma_device_id": "system", 00:10:50.805 "dma_device_type": 1 00:10:50.805 }, 00:10:50.805 { 00:10:50.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.805 "dma_device_type": 2 00:10:50.805 } 00:10:50.805 ], 00:10:50.805 "driver_specific": {} 00:10:50.805 } 00:10:50.805 ] 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.805 "name": "Existed_Raid", 00:10:50.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.805 "strip_size_kb": 64, 00:10:50.805 "state": "configuring", 00:10:50.805 "raid_level": "concat", 00:10:50.805 "superblock": false, 00:10:50.805 "num_base_bdevs": 4, 00:10:50.805 "num_base_bdevs_discovered": 3, 00:10:50.805 "num_base_bdevs_operational": 4, 00:10:50.805 "base_bdevs_list": [ 00:10:50.805 { 00:10:50.805 "name": "BaseBdev1", 00:10:50.805 "uuid": "96216331-9a76-4509-9cf7-28b3e90ef513", 00:10:50.805 "is_configured": true, 00:10:50.805 "data_offset": 0, 00:10:50.805 "data_size": 65536 00:10:50.805 }, 00:10:50.805 { 00:10:50.805 "name": "BaseBdev2", 00:10:50.805 "uuid": "0a37d573-e64e-484c-9f9e-6bc5218fdc4e", 00:10:50.805 "is_configured": true, 00:10:50.805 "data_offset": 0, 00:10:50.805 "data_size": 65536 00:10:50.805 }, 00:10:50.805 { 00:10:50.805 "name": "BaseBdev3", 00:10:50.805 "uuid": "4f2c8791-fe02-485a-9382-9edffd1a9e0f", 00:10:50.805 "is_configured": true, 00:10:50.805 "data_offset": 0, 00:10:50.805 "data_size": 65536 00:10:50.805 }, 00:10:50.805 { 00:10:50.805 "name": "BaseBdev4", 00:10:50.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.805 "is_configured": false, 
00:10:50.805 "data_offset": 0, 00:10:50.805 "data_size": 0 00:10:50.805 } 00:10:50.805 ] 00:10:50.805 }' 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.805 08:48:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.065 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:51.065 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.065 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.325 [2024-09-28 08:48:29.071520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:51.325 [2024-09-28 08:48:29.071689] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:51.325 [2024-09-28 08:48:29.071719] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:51.325 [2024-09-28 08:48:29.072048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:51.325 [2024-09-28 08:48:29.072271] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:51.325 [2024-09-28 08:48:29.072319] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:51.325 [2024-09-28 08:48:29.072644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.325 BaseBdev4 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.325 [ 00:10:51.325 { 00:10:51.325 "name": "BaseBdev4", 00:10:51.325 "aliases": [ 00:10:51.325 "a3a1cb23-83a4-4bf3-8194-fdce067c1031" 00:10:51.325 ], 00:10:51.325 "product_name": "Malloc disk", 00:10:51.325 "block_size": 512, 00:10:51.325 "num_blocks": 65536, 00:10:51.325 "uuid": "a3a1cb23-83a4-4bf3-8194-fdce067c1031", 00:10:51.325 "assigned_rate_limits": { 00:10:51.325 "rw_ios_per_sec": 0, 00:10:51.325 "rw_mbytes_per_sec": 0, 00:10:51.325 "r_mbytes_per_sec": 0, 00:10:51.325 "w_mbytes_per_sec": 0 00:10:51.325 }, 00:10:51.325 "claimed": true, 00:10:51.325 "claim_type": "exclusive_write", 00:10:51.325 "zoned": false, 00:10:51.325 "supported_io_types": { 00:10:51.325 "read": true, 00:10:51.325 "write": true, 00:10:51.325 "unmap": true, 00:10:51.325 "flush": true, 00:10:51.325 "reset": true, 00:10:51.325 
"nvme_admin": false, 00:10:51.325 "nvme_io": false, 00:10:51.325 "nvme_io_md": false, 00:10:51.325 "write_zeroes": true, 00:10:51.325 "zcopy": true, 00:10:51.325 "get_zone_info": false, 00:10:51.325 "zone_management": false, 00:10:51.325 "zone_append": false, 00:10:51.325 "compare": false, 00:10:51.325 "compare_and_write": false, 00:10:51.325 "abort": true, 00:10:51.325 "seek_hole": false, 00:10:51.325 "seek_data": false, 00:10:51.325 "copy": true, 00:10:51.325 "nvme_iov_md": false 00:10:51.325 }, 00:10:51.325 "memory_domains": [ 00:10:51.325 { 00:10:51.325 "dma_device_id": "system", 00:10:51.325 "dma_device_type": 1 00:10:51.325 }, 00:10:51.325 { 00:10:51.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.325 "dma_device_type": 2 00:10:51.325 } 00:10:51.325 ], 00:10:51.325 "driver_specific": {} 00:10:51.325 } 00:10:51.325 ] 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.325 
08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.325 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.325 "name": "Existed_Raid", 00:10:51.325 "uuid": "a943d6bf-4cb5-4bd5-a650-8a96826b183d", 00:10:51.325 "strip_size_kb": 64, 00:10:51.325 "state": "online", 00:10:51.325 "raid_level": "concat", 00:10:51.325 "superblock": false, 00:10:51.325 "num_base_bdevs": 4, 00:10:51.325 "num_base_bdevs_discovered": 4, 00:10:51.325 "num_base_bdevs_operational": 4, 00:10:51.326 "base_bdevs_list": [ 00:10:51.326 { 00:10:51.326 "name": "BaseBdev1", 00:10:51.326 "uuid": "96216331-9a76-4509-9cf7-28b3e90ef513", 00:10:51.326 "is_configured": true, 00:10:51.326 "data_offset": 0, 00:10:51.326 "data_size": 65536 00:10:51.326 }, 00:10:51.326 { 00:10:51.326 "name": "BaseBdev2", 00:10:51.326 "uuid": "0a37d573-e64e-484c-9f9e-6bc5218fdc4e", 00:10:51.326 "is_configured": true, 00:10:51.326 "data_offset": 0, 00:10:51.326 "data_size": 65536 00:10:51.326 }, 00:10:51.326 { 00:10:51.326 "name": "BaseBdev3", 
00:10:51.326 "uuid": "4f2c8791-fe02-485a-9382-9edffd1a9e0f", 00:10:51.326 "is_configured": true, 00:10:51.326 "data_offset": 0, 00:10:51.326 "data_size": 65536 00:10:51.326 }, 00:10:51.326 { 00:10:51.326 "name": "BaseBdev4", 00:10:51.326 "uuid": "a3a1cb23-83a4-4bf3-8194-fdce067c1031", 00:10:51.326 "is_configured": true, 00:10:51.326 "data_offset": 0, 00:10:51.326 "data_size": 65536 00:10:51.326 } 00:10:51.326 ] 00:10:51.326 }' 00:10:51.326 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.326 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.585 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:51.585 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:51.585 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.585 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.585 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.585 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.585 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.585 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:51.585 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.585 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.585 [2024-09-28 08:48:29.575054] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.846 
08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.846 "name": "Existed_Raid", 00:10:51.846 "aliases": [ 00:10:51.846 "a943d6bf-4cb5-4bd5-a650-8a96826b183d" 00:10:51.846 ], 00:10:51.846 "product_name": "Raid Volume", 00:10:51.846 "block_size": 512, 00:10:51.846 "num_blocks": 262144, 00:10:51.846 "uuid": "a943d6bf-4cb5-4bd5-a650-8a96826b183d", 00:10:51.846 "assigned_rate_limits": { 00:10:51.846 "rw_ios_per_sec": 0, 00:10:51.846 "rw_mbytes_per_sec": 0, 00:10:51.846 "r_mbytes_per_sec": 0, 00:10:51.846 "w_mbytes_per_sec": 0 00:10:51.846 }, 00:10:51.846 "claimed": false, 00:10:51.846 "zoned": false, 00:10:51.846 "supported_io_types": { 00:10:51.846 "read": true, 00:10:51.846 "write": true, 00:10:51.846 "unmap": true, 00:10:51.846 "flush": true, 00:10:51.846 "reset": true, 00:10:51.846 "nvme_admin": false, 00:10:51.846 "nvme_io": false, 00:10:51.846 "nvme_io_md": false, 00:10:51.846 "write_zeroes": true, 00:10:51.846 "zcopy": false, 00:10:51.846 "get_zone_info": false, 00:10:51.846 "zone_management": false, 00:10:51.846 "zone_append": false, 00:10:51.846 "compare": false, 00:10:51.846 "compare_and_write": false, 00:10:51.846 "abort": false, 00:10:51.846 "seek_hole": false, 00:10:51.846 "seek_data": false, 00:10:51.846 "copy": false, 00:10:51.846 "nvme_iov_md": false 00:10:51.846 }, 00:10:51.846 "memory_domains": [ 00:10:51.846 { 00:10:51.846 "dma_device_id": "system", 00:10:51.846 "dma_device_type": 1 00:10:51.846 }, 00:10:51.846 { 00:10:51.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.846 "dma_device_type": 2 00:10:51.846 }, 00:10:51.846 { 00:10:51.846 "dma_device_id": "system", 00:10:51.846 "dma_device_type": 1 00:10:51.846 }, 00:10:51.846 { 00:10:51.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.846 "dma_device_type": 2 00:10:51.846 }, 00:10:51.846 { 00:10:51.846 "dma_device_id": "system", 00:10:51.846 "dma_device_type": 1 00:10:51.846 }, 00:10:51.846 { 00:10:51.846 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:51.846 "dma_device_type": 2 00:10:51.846 }, 00:10:51.846 { 00:10:51.846 "dma_device_id": "system", 00:10:51.846 "dma_device_type": 1 00:10:51.846 }, 00:10:51.846 { 00:10:51.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.846 "dma_device_type": 2 00:10:51.846 } 00:10:51.846 ], 00:10:51.846 "driver_specific": { 00:10:51.846 "raid": { 00:10:51.846 "uuid": "a943d6bf-4cb5-4bd5-a650-8a96826b183d", 00:10:51.846 "strip_size_kb": 64, 00:10:51.846 "state": "online", 00:10:51.846 "raid_level": "concat", 00:10:51.846 "superblock": false, 00:10:51.846 "num_base_bdevs": 4, 00:10:51.846 "num_base_bdevs_discovered": 4, 00:10:51.846 "num_base_bdevs_operational": 4, 00:10:51.846 "base_bdevs_list": [ 00:10:51.846 { 00:10:51.846 "name": "BaseBdev1", 00:10:51.846 "uuid": "96216331-9a76-4509-9cf7-28b3e90ef513", 00:10:51.846 "is_configured": true, 00:10:51.846 "data_offset": 0, 00:10:51.846 "data_size": 65536 00:10:51.846 }, 00:10:51.846 { 00:10:51.846 "name": "BaseBdev2", 00:10:51.846 "uuid": "0a37d573-e64e-484c-9f9e-6bc5218fdc4e", 00:10:51.846 "is_configured": true, 00:10:51.846 "data_offset": 0, 00:10:51.846 "data_size": 65536 00:10:51.846 }, 00:10:51.846 { 00:10:51.846 "name": "BaseBdev3", 00:10:51.846 "uuid": "4f2c8791-fe02-485a-9382-9edffd1a9e0f", 00:10:51.846 "is_configured": true, 00:10:51.846 "data_offset": 0, 00:10:51.846 "data_size": 65536 00:10:51.846 }, 00:10:51.846 { 00:10:51.846 "name": "BaseBdev4", 00:10:51.846 "uuid": "a3a1cb23-83a4-4bf3-8194-fdce067c1031", 00:10:51.846 "is_configured": true, 00:10:51.846 "data_offset": 0, 00:10:51.846 "data_size": 65536 00:10:51.846 } 00:10:51.846 ] 00:10:51.846 } 00:10:51.846 } 00:10:51.846 }' 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:51.846 BaseBdev2 
00:10:51.846 BaseBdev3 00:10:51.846 BaseBdev4' 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.846 08:48:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.846 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.107 08:48:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.107 [2024-09-28 08:48:29.870225] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:52.107 [2024-09-28 08:48:29.870257] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.107 [2024-09-28 08:48:29.870312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.107 08:48:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.107 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.107 "name": "Existed_Raid", 00:10:52.107 "uuid": "a943d6bf-4cb5-4bd5-a650-8a96826b183d", 00:10:52.107 "strip_size_kb": 64, 00:10:52.107 "state": "offline", 00:10:52.107 "raid_level": "concat", 00:10:52.107 "superblock": false, 00:10:52.107 "num_base_bdevs": 4, 00:10:52.107 "num_base_bdevs_discovered": 3, 00:10:52.107 "num_base_bdevs_operational": 3, 00:10:52.107 "base_bdevs_list": [ 00:10:52.107 { 00:10:52.107 "name": null, 00:10:52.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.107 "is_configured": false, 00:10:52.107 "data_offset": 0, 00:10:52.107 "data_size": 65536 00:10:52.107 }, 00:10:52.107 { 00:10:52.107 "name": "BaseBdev2", 00:10:52.107 "uuid": "0a37d573-e64e-484c-9f9e-6bc5218fdc4e", 00:10:52.107 "is_configured": 
true, 00:10:52.107 "data_offset": 0, 00:10:52.107 "data_size": 65536 00:10:52.107 }, 00:10:52.107 { 00:10:52.107 "name": "BaseBdev3", 00:10:52.107 "uuid": "4f2c8791-fe02-485a-9382-9edffd1a9e0f", 00:10:52.107 "is_configured": true, 00:10:52.107 "data_offset": 0, 00:10:52.107 "data_size": 65536 00:10:52.107 }, 00:10:52.107 { 00:10:52.107 "name": "BaseBdev4", 00:10:52.107 "uuid": "a3a1cb23-83a4-4bf3-8194-fdce067c1031", 00:10:52.107 "is_configured": true, 00:10:52.107 "data_offset": 0, 00:10:52.107 "data_size": 65536 00:10:52.107 } 00:10:52.107 ] 00:10:52.107 }' 00:10:52.107 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.107 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.678 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:52.678 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.678 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.678 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:52.678 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.678 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.678 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.678 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:52.678 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:52.678 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:52.678 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:52.678 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.678 [2024-09-28 08:48:30.450348] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:52.678 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.678 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:52.679 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.679 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.679 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:52.679 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.679 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.679 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.679 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:52.679 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:52.679 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:52.679 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.679 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.679 [2024-09-28 08:48:30.612305] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:52.939 08:48:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.939 [2024-09-28 08:48:30.777119] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:52.939 [2024-09-28 08:48:30.777223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:52.939 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:53.199 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.199 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.199 BaseBdev2 00:10:53.199 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.199 08:48:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:53.199 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:53.199 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:53.199 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:53.199 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:53.199 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:53.199 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:53.199 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.199 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.199 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.199 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:53.199 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.199 08:48:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.199 [ 00:10:53.199 { 00:10:53.199 "name": "BaseBdev2", 00:10:53.199 "aliases": [ 00:10:53.199 "6db154ee-0d00-404e-a9f9-4d044bdae537" 00:10:53.199 ], 00:10:53.199 "product_name": "Malloc disk", 00:10:53.199 "block_size": 512, 00:10:53.199 "num_blocks": 65536, 00:10:53.199 "uuid": "6db154ee-0d00-404e-a9f9-4d044bdae537", 00:10:53.199 "assigned_rate_limits": { 00:10:53.199 "rw_ios_per_sec": 0, 00:10:53.199 "rw_mbytes_per_sec": 0, 00:10:53.199 "r_mbytes_per_sec": 0, 00:10:53.199 "w_mbytes_per_sec": 0 00:10:53.199 }, 00:10:53.199 "claimed": false, 00:10:53.199 "zoned": false, 00:10:53.199 "supported_io_types": { 00:10:53.199 "read": true, 00:10:53.199 "write": true, 00:10:53.199 "unmap": true, 00:10:53.199 "flush": true, 00:10:53.199 "reset": true, 00:10:53.199 "nvme_admin": false, 00:10:53.199 "nvme_io": false, 00:10:53.199 "nvme_io_md": false, 00:10:53.199 "write_zeroes": true, 00:10:53.199 "zcopy": true, 00:10:53.199 "get_zone_info": false, 00:10:53.199 "zone_management": false, 00:10:53.199 "zone_append": false, 00:10:53.199 "compare": false, 00:10:53.199 "compare_and_write": false, 00:10:53.199 "abort": true, 00:10:53.199 "seek_hole": false, 00:10:53.199 
"seek_data": false, 00:10:53.199 "copy": true, 00:10:53.199 "nvme_iov_md": false 00:10:53.199 }, 00:10:53.199 "memory_domains": [ 00:10:53.199 { 00:10:53.199 "dma_device_id": "system", 00:10:53.199 "dma_device_type": 1 00:10:53.199 }, 00:10:53.199 { 00:10:53.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.199 "dma_device_type": 2 00:10:53.199 } 00:10:53.199 ], 00:10:53.199 "driver_specific": {} 00:10:53.199 } 00:10:53.199 ] 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.199 BaseBdev3 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.199 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.199 [ 00:10:53.199 { 00:10:53.199 "name": "BaseBdev3", 00:10:53.199 "aliases": [ 00:10:53.199 "4af10eda-d371-440c-ba9f-f0bf64c5eca9" 00:10:53.199 ], 00:10:53.199 "product_name": "Malloc disk", 00:10:53.199 "block_size": 512, 00:10:53.199 "num_blocks": 65536, 00:10:53.199 "uuid": "4af10eda-d371-440c-ba9f-f0bf64c5eca9", 00:10:53.199 "assigned_rate_limits": { 00:10:53.199 "rw_ios_per_sec": 0, 00:10:53.199 "rw_mbytes_per_sec": 0, 00:10:53.199 "r_mbytes_per_sec": 0, 00:10:53.199 "w_mbytes_per_sec": 0 00:10:53.199 }, 00:10:53.199 "claimed": false, 00:10:53.199 "zoned": false, 00:10:53.199 "supported_io_types": { 00:10:53.199 "read": true, 00:10:53.199 "write": true, 00:10:53.200 "unmap": true, 00:10:53.200 "flush": true, 00:10:53.200 "reset": true, 00:10:53.200 "nvme_admin": false, 00:10:53.200 "nvme_io": false, 00:10:53.200 "nvme_io_md": false, 00:10:53.200 "write_zeroes": true, 00:10:53.200 "zcopy": true, 00:10:53.200 "get_zone_info": false, 00:10:53.200 "zone_management": false, 00:10:53.200 "zone_append": false, 00:10:53.200 "compare": false, 00:10:53.200 "compare_and_write": false, 00:10:53.200 "abort": true, 00:10:53.200 "seek_hole": false, 00:10:53.200 "seek_data": false, 
00:10:53.200 "copy": true, 00:10:53.200 "nvme_iov_md": false 00:10:53.200 }, 00:10:53.200 "memory_domains": [ 00:10:53.200 { 00:10:53.200 "dma_device_id": "system", 00:10:53.200 "dma_device_type": 1 00:10:53.200 }, 00:10:53.200 { 00:10:53.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.200 "dma_device_type": 2 00:10:53.200 } 00:10:53.200 ], 00:10:53.200 "driver_specific": {} 00:10:53.200 } 00:10:53.200 ] 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.200 BaseBdev4 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:53.200 
08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.200 [ 00:10:53.200 { 00:10:53.200 "name": "BaseBdev4", 00:10:53.200 "aliases": [ 00:10:53.200 "d595a29b-b632-44d8-9342-bd2b0db8ea23" 00:10:53.200 ], 00:10:53.200 "product_name": "Malloc disk", 00:10:53.200 "block_size": 512, 00:10:53.200 "num_blocks": 65536, 00:10:53.200 "uuid": "d595a29b-b632-44d8-9342-bd2b0db8ea23", 00:10:53.200 "assigned_rate_limits": { 00:10:53.200 "rw_ios_per_sec": 0, 00:10:53.200 "rw_mbytes_per_sec": 0, 00:10:53.200 "r_mbytes_per_sec": 0, 00:10:53.200 "w_mbytes_per_sec": 0 00:10:53.200 }, 00:10:53.200 "claimed": false, 00:10:53.200 "zoned": false, 00:10:53.200 "supported_io_types": { 00:10:53.200 "read": true, 00:10:53.200 "write": true, 00:10:53.200 "unmap": true, 00:10:53.200 "flush": true, 00:10:53.200 "reset": true, 00:10:53.200 "nvme_admin": false, 00:10:53.200 "nvme_io": false, 00:10:53.200 "nvme_io_md": false, 00:10:53.200 "write_zeroes": true, 00:10:53.200 "zcopy": true, 00:10:53.200 "get_zone_info": false, 00:10:53.200 "zone_management": false, 00:10:53.200 "zone_append": false, 00:10:53.200 "compare": false, 00:10:53.200 "compare_and_write": false, 00:10:53.200 "abort": true, 00:10:53.200 "seek_hole": false, 00:10:53.200 "seek_data": false, 00:10:53.200 
"copy": true, 00:10:53.200 "nvme_iov_md": false 00:10:53.200 }, 00:10:53.200 "memory_domains": [ 00:10:53.200 { 00:10:53.200 "dma_device_id": "system", 00:10:53.200 "dma_device_type": 1 00:10:53.200 }, 00:10:53.200 { 00:10:53.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.200 "dma_device_type": 2 00:10:53.200 } 00:10:53.200 ], 00:10:53.200 "driver_specific": {} 00:10:53.200 } 00:10:53.200 ] 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.200 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.200 [2024-09-28 08:48:31.189614] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.200 [2024-09-28 08:48:31.189719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.200 [2024-09-28 08:48:31.189765] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.200 [2024-09-28 08:48:31.191920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.200 [2024-09-28 08:48:31.192040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.460 08:48:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.460 "name": "Existed_Raid", 00:10:53.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.460 "strip_size_kb": 64, 00:10:53.460 "state": "configuring", 00:10:53.460 
"raid_level": "concat", 00:10:53.460 "superblock": false, 00:10:53.460 "num_base_bdevs": 4, 00:10:53.460 "num_base_bdevs_discovered": 3, 00:10:53.460 "num_base_bdevs_operational": 4, 00:10:53.460 "base_bdevs_list": [ 00:10:53.460 { 00:10:53.460 "name": "BaseBdev1", 00:10:53.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.460 "is_configured": false, 00:10:53.460 "data_offset": 0, 00:10:53.460 "data_size": 0 00:10:53.460 }, 00:10:53.460 { 00:10:53.460 "name": "BaseBdev2", 00:10:53.460 "uuid": "6db154ee-0d00-404e-a9f9-4d044bdae537", 00:10:53.460 "is_configured": true, 00:10:53.460 "data_offset": 0, 00:10:53.460 "data_size": 65536 00:10:53.460 }, 00:10:53.460 { 00:10:53.460 "name": "BaseBdev3", 00:10:53.460 "uuid": "4af10eda-d371-440c-ba9f-f0bf64c5eca9", 00:10:53.460 "is_configured": true, 00:10:53.460 "data_offset": 0, 00:10:53.460 "data_size": 65536 00:10:53.460 }, 00:10:53.460 { 00:10:53.460 "name": "BaseBdev4", 00:10:53.460 "uuid": "d595a29b-b632-44d8-9342-bd2b0db8ea23", 00:10:53.460 "is_configured": true, 00:10:53.460 "data_offset": 0, 00:10:53.460 "data_size": 65536 00:10:53.460 } 00:10:53.460 ] 00:10:53.460 }' 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.460 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.720 [2024-09-28 08:48:31.624850] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.720 "name": "Existed_Raid", 00:10:53.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.720 "strip_size_kb": 64, 00:10:53.720 "state": "configuring", 00:10:53.720 "raid_level": "concat", 00:10:53.720 "superblock": false, 
00:10:53.720 "num_base_bdevs": 4, 00:10:53.720 "num_base_bdevs_discovered": 2, 00:10:53.720 "num_base_bdevs_operational": 4, 00:10:53.720 "base_bdevs_list": [ 00:10:53.720 { 00:10:53.720 "name": "BaseBdev1", 00:10:53.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.720 "is_configured": false, 00:10:53.720 "data_offset": 0, 00:10:53.720 "data_size": 0 00:10:53.720 }, 00:10:53.720 { 00:10:53.720 "name": null, 00:10:53.720 "uuid": "6db154ee-0d00-404e-a9f9-4d044bdae537", 00:10:53.720 "is_configured": false, 00:10:53.720 "data_offset": 0, 00:10:53.720 "data_size": 65536 00:10:53.720 }, 00:10:53.720 { 00:10:53.720 "name": "BaseBdev3", 00:10:53.720 "uuid": "4af10eda-d371-440c-ba9f-f0bf64c5eca9", 00:10:53.720 "is_configured": true, 00:10:53.720 "data_offset": 0, 00:10:53.720 "data_size": 65536 00:10:53.720 }, 00:10:53.720 { 00:10:53.720 "name": "BaseBdev4", 00:10:53.720 "uuid": "d595a29b-b632-44d8-9342-bd2b0db8ea23", 00:10:53.720 "is_configured": true, 00:10:53.720 "data_offset": 0, 00:10:53.720 "data_size": 65536 00:10:53.720 } 00:10:53.720 ] 00:10:53.720 }' 00:10:53.720 08:48:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.721 08:48:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:54.290 08:48:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.290 [2024-09-28 08:48:32.133399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.290 BaseBdev1 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.290 08:48:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:54.290 [ 00:10:54.290 { 00:10:54.290 "name": "BaseBdev1", 00:10:54.290 "aliases": [ 00:10:54.290 "4702af0f-3183-4b21-9550-2242c09d44bb" 00:10:54.290 ], 00:10:54.290 "product_name": "Malloc disk", 00:10:54.290 "block_size": 512, 00:10:54.290 "num_blocks": 65536, 00:10:54.290 "uuid": "4702af0f-3183-4b21-9550-2242c09d44bb", 00:10:54.290 "assigned_rate_limits": { 00:10:54.290 "rw_ios_per_sec": 0, 00:10:54.290 "rw_mbytes_per_sec": 0, 00:10:54.290 "r_mbytes_per_sec": 0, 00:10:54.290 "w_mbytes_per_sec": 0 00:10:54.290 }, 00:10:54.290 "claimed": true, 00:10:54.290 "claim_type": "exclusive_write", 00:10:54.290 "zoned": false, 00:10:54.290 "supported_io_types": { 00:10:54.290 "read": true, 00:10:54.290 "write": true, 00:10:54.290 "unmap": true, 00:10:54.290 "flush": true, 00:10:54.290 "reset": true, 00:10:54.290 "nvme_admin": false, 00:10:54.290 "nvme_io": false, 00:10:54.290 "nvme_io_md": false, 00:10:54.290 "write_zeroes": true, 00:10:54.290 "zcopy": true, 00:10:54.290 "get_zone_info": false, 00:10:54.290 "zone_management": false, 00:10:54.290 "zone_append": false, 00:10:54.290 "compare": false, 00:10:54.290 "compare_and_write": false, 00:10:54.290 "abort": true, 00:10:54.290 "seek_hole": false, 00:10:54.290 "seek_data": false, 00:10:54.291 "copy": true, 00:10:54.291 "nvme_iov_md": false 00:10:54.291 }, 00:10:54.291 "memory_domains": [ 00:10:54.291 { 00:10:54.291 "dma_device_id": "system", 00:10:54.291 "dma_device_type": 1 00:10:54.291 }, 00:10:54.291 { 00:10:54.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.291 "dma_device_type": 2 00:10:54.291 } 00:10:54.291 ], 00:10:54.291 "driver_specific": {} 00:10:54.291 } 00:10:54.291 ] 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.291 "name": "Existed_Raid", 00:10:54.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.291 "strip_size_kb": 64, 00:10:54.291 "state": "configuring", 00:10:54.291 "raid_level": "concat", 00:10:54.291 "superblock": false, 
00:10:54.291 "num_base_bdevs": 4, 00:10:54.291 "num_base_bdevs_discovered": 3, 00:10:54.291 "num_base_bdevs_operational": 4, 00:10:54.291 "base_bdevs_list": [ 00:10:54.291 { 00:10:54.291 "name": "BaseBdev1", 00:10:54.291 "uuid": "4702af0f-3183-4b21-9550-2242c09d44bb", 00:10:54.291 "is_configured": true, 00:10:54.291 "data_offset": 0, 00:10:54.291 "data_size": 65536 00:10:54.291 }, 00:10:54.291 { 00:10:54.291 "name": null, 00:10:54.291 "uuid": "6db154ee-0d00-404e-a9f9-4d044bdae537", 00:10:54.291 "is_configured": false, 00:10:54.291 "data_offset": 0, 00:10:54.291 "data_size": 65536 00:10:54.291 }, 00:10:54.291 { 00:10:54.291 "name": "BaseBdev3", 00:10:54.291 "uuid": "4af10eda-d371-440c-ba9f-f0bf64c5eca9", 00:10:54.291 "is_configured": true, 00:10:54.291 "data_offset": 0, 00:10:54.291 "data_size": 65536 00:10:54.291 }, 00:10:54.291 { 00:10:54.291 "name": "BaseBdev4", 00:10:54.291 "uuid": "d595a29b-b632-44d8-9342-bd2b0db8ea23", 00:10:54.291 "is_configured": true, 00:10:54.291 "data_offset": 0, 00:10:54.291 "data_size": 65536 00:10:54.291 } 00:10:54.291 ] 00:10:54.291 }' 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.291 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:54.861 08:48:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.861 [2024-09-28 08:48:32.668527] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.861 08:48:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.861 "name": "Existed_Raid", 00:10:54.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.861 "strip_size_kb": 64, 00:10:54.861 "state": "configuring", 00:10:54.861 "raid_level": "concat", 00:10:54.861 "superblock": false, 00:10:54.861 "num_base_bdevs": 4, 00:10:54.861 "num_base_bdevs_discovered": 2, 00:10:54.861 "num_base_bdevs_operational": 4, 00:10:54.861 "base_bdevs_list": [ 00:10:54.861 { 00:10:54.861 "name": "BaseBdev1", 00:10:54.861 "uuid": "4702af0f-3183-4b21-9550-2242c09d44bb", 00:10:54.861 "is_configured": true, 00:10:54.861 "data_offset": 0, 00:10:54.861 "data_size": 65536 00:10:54.861 }, 00:10:54.861 { 00:10:54.861 "name": null, 00:10:54.861 "uuid": "6db154ee-0d00-404e-a9f9-4d044bdae537", 00:10:54.861 "is_configured": false, 00:10:54.861 "data_offset": 0, 00:10:54.861 "data_size": 65536 00:10:54.861 }, 00:10:54.861 { 00:10:54.861 "name": null, 00:10:54.861 "uuid": "4af10eda-d371-440c-ba9f-f0bf64c5eca9", 00:10:54.861 "is_configured": false, 00:10:54.861 "data_offset": 0, 00:10:54.861 "data_size": 65536 00:10:54.861 }, 00:10:54.861 { 00:10:54.861 "name": "BaseBdev4", 00:10:54.861 "uuid": "d595a29b-b632-44d8-9342-bd2b0db8ea23", 00:10:54.861 "is_configured": true, 00:10:54.861 "data_offset": 0, 00:10:54.861 "data_size": 65536 00:10:54.861 } 00:10:54.861 ] 00:10:54.861 }' 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.861 08:48:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.122 08:48:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.122 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:55.122 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.122 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.122 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.381 [2024-09-28 08:48:33.127777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.381 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.381 "name": "Existed_Raid", 00:10:55.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.381 "strip_size_kb": 64, 00:10:55.381 "state": "configuring", 00:10:55.381 "raid_level": "concat", 00:10:55.381 "superblock": false, 00:10:55.381 "num_base_bdevs": 4, 00:10:55.381 "num_base_bdevs_discovered": 3, 00:10:55.381 "num_base_bdevs_operational": 4, 00:10:55.381 "base_bdevs_list": [ 00:10:55.381 { 00:10:55.381 "name": "BaseBdev1", 00:10:55.381 "uuid": "4702af0f-3183-4b21-9550-2242c09d44bb", 00:10:55.381 "is_configured": true, 00:10:55.381 "data_offset": 0, 00:10:55.381 "data_size": 65536 00:10:55.381 }, 00:10:55.381 { 00:10:55.381 "name": null, 00:10:55.382 "uuid": "6db154ee-0d00-404e-a9f9-4d044bdae537", 00:10:55.382 "is_configured": false, 00:10:55.382 "data_offset": 0, 00:10:55.382 "data_size": 65536 00:10:55.382 }, 00:10:55.382 { 00:10:55.382 "name": "BaseBdev3", 00:10:55.382 "uuid": 
"4af10eda-d371-440c-ba9f-f0bf64c5eca9", 00:10:55.382 "is_configured": true, 00:10:55.382 "data_offset": 0, 00:10:55.382 "data_size": 65536 00:10:55.382 }, 00:10:55.382 { 00:10:55.382 "name": "BaseBdev4", 00:10:55.382 "uuid": "d595a29b-b632-44d8-9342-bd2b0db8ea23", 00:10:55.382 "is_configured": true, 00:10:55.382 "data_offset": 0, 00:10:55.382 "data_size": 65536 00:10:55.382 } 00:10:55.382 ] 00:10:55.382 }' 00:10:55.382 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.382 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.641 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.641 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:55.641 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.641 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.641 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.641 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:55.641 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:55.641 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.641 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.641 [2024-09-28 08:48:33.606942] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.901 "name": "Existed_Raid", 00:10:55.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.901 "strip_size_kb": 64, 00:10:55.901 "state": "configuring", 00:10:55.901 "raid_level": "concat", 00:10:55.901 "superblock": false, 00:10:55.901 "num_base_bdevs": 4, 00:10:55.901 
"num_base_bdevs_discovered": 2, 00:10:55.901 "num_base_bdevs_operational": 4, 00:10:55.901 "base_bdevs_list": [ 00:10:55.901 { 00:10:55.901 "name": null, 00:10:55.901 "uuid": "4702af0f-3183-4b21-9550-2242c09d44bb", 00:10:55.901 "is_configured": false, 00:10:55.901 "data_offset": 0, 00:10:55.901 "data_size": 65536 00:10:55.901 }, 00:10:55.901 { 00:10:55.901 "name": null, 00:10:55.901 "uuid": "6db154ee-0d00-404e-a9f9-4d044bdae537", 00:10:55.901 "is_configured": false, 00:10:55.901 "data_offset": 0, 00:10:55.901 "data_size": 65536 00:10:55.901 }, 00:10:55.901 { 00:10:55.901 "name": "BaseBdev3", 00:10:55.901 "uuid": "4af10eda-d371-440c-ba9f-f0bf64c5eca9", 00:10:55.901 "is_configured": true, 00:10:55.901 "data_offset": 0, 00:10:55.901 "data_size": 65536 00:10:55.901 }, 00:10:55.901 { 00:10:55.901 "name": "BaseBdev4", 00:10:55.901 "uuid": "d595a29b-b632-44d8-9342-bd2b0db8ea23", 00:10:55.901 "is_configured": true, 00:10:55.901 "data_offset": 0, 00:10:55.901 "data_size": 65536 00:10:55.901 } 00:10:55.901 ] 00:10:55.901 }' 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.901 08:48:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.471 [2024-09-28 08:48:34.221274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.471 "name": "Existed_Raid", 00:10:56.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.471 "strip_size_kb": 64, 00:10:56.471 "state": "configuring", 00:10:56.471 "raid_level": "concat", 00:10:56.471 "superblock": false, 00:10:56.471 "num_base_bdevs": 4, 00:10:56.471 "num_base_bdevs_discovered": 3, 00:10:56.471 "num_base_bdevs_operational": 4, 00:10:56.471 "base_bdevs_list": [ 00:10:56.471 { 00:10:56.471 "name": null, 00:10:56.471 "uuid": "4702af0f-3183-4b21-9550-2242c09d44bb", 00:10:56.471 "is_configured": false, 00:10:56.471 "data_offset": 0, 00:10:56.471 "data_size": 65536 00:10:56.471 }, 00:10:56.471 { 00:10:56.471 "name": "BaseBdev2", 00:10:56.471 "uuid": "6db154ee-0d00-404e-a9f9-4d044bdae537", 00:10:56.471 "is_configured": true, 00:10:56.471 "data_offset": 0, 00:10:56.471 "data_size": 65536 00:10:56.471 }, 00:10:56.471 { 00:10:56.471 "name": "BaseBdev3", 00:10:56.471 "uuid": "4af10eda-d371-440c-ba9f-f0bf64c5eca9", 00:10:56.471 "is_configured": true, 00:10:56.471 "data_offset": 0, 00:10:56.471 "data_size": 65536 00:10:56.471 }, 00:10:56.471 { 00:10:56.471 "name": "BaseBdev4", 00:10:56.471 "uuid": "d595a29b-b632-44d8-9342-bd2b0db8ea23", 00:10:56.471 "is_configured": true, 00:10:56.471 "data_offset": 0, 00:10:56.471 "data_size": 65536 00:10:56.471 } 00:10:56.471 ] 00:10:56.471 }' 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.471 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.731 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:56.731 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:56.731 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.731 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.731 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.731 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:56.731 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.731 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.731 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.731 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:56.731 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.991 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4702af0f-3183-4b21-9550-2242c09d44bb 00:10:56.991 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.991 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.991 [2024-09-28 08:48:34.774122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:56.991 [2024-09-28 08:48:34.774239] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:56.991 [2024-09-28 08:48:34.774252] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:56.991 [2024-09-28 08:48:34.774576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:10:56.991 [2024-09-28 08:48:34.774763] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:56.991 [2024-09-28 08:48:34.774777] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:56.991 [2024-09-28 08:48:34.775031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.991 NewBaseBdev 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.992 [ 00:10:56.992 { 00:10:56.992 "name": "NewBaseBdev", 00:10:56.992 "aliases": [ 00:10:56.992 "4702af0f-3183-4b21-9550-2242c09d44bb" 00:10:56.992 ], 00:10:56.992 "product_name": "Malloc disk", 00:10:56.992 "block_size": 512, 00:10:56.992 "num_blocks": 65536, 00:10:56.992 "uuid": "4702af0f-3183-4b21-9550-2242c09d44bb", 00:10:56.992 "assigned_rate_limits": { 00:10:56.992 "rw_ios_per_sec": 0, 00:10:56.992 "rw_mbytes_per_sec": 0, 00:10:56.992 "r_mbytes_per_sec": 0, 00:10:56.992 "w_mbytes_per_sec": 0 00:10:56.992 }, 00:10:56.992 "claimed": true, 00:10:56.992 "claim_type": "exclusive_write", 00:10:56.992 "zoned": false, 00:10:56.992 "supported_io_types": { 00:10:56.992 "read": true, 00:10:56.992 "write": true, 00:10:56.992 "unmap": true, 00:10:56.992 "flush": true, 00:10:56.992 "reset": true, 00:10:56.992 "nvme_admin": false, 00:10:56.992 "nvme_io": false, 00:10:56.992 "nvme_io_md": false, 00:10:56.992 "write_zeroes": true, 00:10:56.992 "zcopy": true, 00:10:56.992 "get_zone_info": false, 00:10:56.992 "zone_management": false, 00:10:56.992 "zone_append": false, 00:10:56.992 "compare": false, 00:10:56.992 "compare_and_write": false, 00:10:56.992 "abort": true, 00:10:56.992 "seek_hole": false, 00:10:56.992 "seek_data": false, 00:10:56.992 "copy": true, 00:10:56.992 "nvme_iov_md": false 00:10:56.992 }, 00:10:56.992 "memory_domains": [ 00:10:56.992 { 00:10:56.992 "dma_device_id": "system", 00:10:56.992 "dma_device_type": 1 00:10:56.992 }, 00:10:56.992 { 00:10:56.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.992 "dma_device_type": 2 00:10:56.992 } 00:10:56.992 ], 00:10:56.992 "driver_specific": {} 00:10:56.992 } 00:10:56.992 ] 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.992 "name": "Existed_Raid", 00:10:56.992 "uuid": "3c5e1214-507e-4d51-bd35-bb8713a4a9b4", 00:10:56.992 "strip_size_kb": 64, 00:10:56.992 "state": "online", 00:10:56.992 "raid_level": "concat", 00:10:56.992 "superblock": false, 00:10:56.992 
"num_base_bdevs": 4, 00:10:56.992 "num_base_bdevs_discovered": 4, 00:10:56.992 "num_base_bdevs_operational": 4, 00:10:56.992 "base_bdevs_list": [ 00:10:56.992 { 00:10:56.992 "name": "NewBaseBdev", 00:10:56.992 "uuid": "4702af0f-3183-4b21-9550-2242c09d44bb", 00:10:56.992 "is_configured": true, 00:10:56.992 "data_offset": 0, 00:10:56.992 "data_size": 65536 00:10:56.992 }, 00:10:56.992 { 00:10:56.992 "name": "BaseBdev2", 00:10:56.992 "uuid": "6db154ee-0d00-404e-a9f9-4d044bdae537", 00:10:56.992 "is_configured": true, 00:10:56.992 "data_offset": 0, 00:10:56.992 "data_size": 65536 00:10:56.992 }, 00:10:56.992 { 00:10:56.992 "name": "BaseBdev3", 00:10:56.992 "uuid": "4af10eda-d371-440c-ba9f-f0bf64c5eca9", 00:10:56.992 "is_configured": true, 00:10:56.992 "data_offset": 0, 00:10:56.992 "data_size": 65536 00:10:56.992 }, 00:10:56.992 { 00:10:56.992 "name": "BaseBdev4", 00:10:56.992 "uuid": "d595a29b-b632-44d8-9342-bd2b0db8ea23", 00:10:56.992 "is_configured": true, 00:10:56.992 "data_offset": 0, 00:10:56.992 "data_size": 65536 00:10:56.992 } 00:10:56.992 ] 00:10:56.992 }' 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.992 08:48:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.562 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:57.562 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:57.562 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.562 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.562 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.562 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.562 08:48:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:57.562 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.562 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.562 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.562 [2024-09-28 08:48:35.281705] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.562 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.562 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.562 "name": "Existed_Raid", 00:10:57.562 "aliases": [ 00:10:57.562 "3c5e1214-507e-4d51-bd35-bb8713a4a9b4" 00:10:57.562 ], 00:10:57.562 "product_name": "Raid Volume", 00:10:57.562 "block_size": 512, 00:10:57.562 "num_blocks": 262144, 00:10:57.562 "uuid": "3c5e1214-507e-4d51-bd35-bb8713a4a9b4", 00:10:57.562 "assigned_rate_limits": { 00:10:57.562 "rw_ios_per_sec": 0, 00:10:57.562 "rw_mbytes_per_sec": 0, 00:10:57.562 "r_mbytes_per_sec": 0, 00:10:57.562 "w_mbytes_per_sec": 0 00:10:57.562 }, 00:10:57.562 "claimed": false, 00:10:57.562 "zoned": false, 00:10:57.562 "supported_io_types": { 00:10:57.562 "read": true, 00:10:57.562 "write": true, 00:10:57.562 "unmap": true, 00:10:57.562 "flush": true, 00:10:57.562 "reset": true, 00:10:57.562 "nvme_admin": false, 00:10:57.562 "nvme_io": false, 00:10:57.562 "nvme_io_md": false, 00:10:57.562 "write_zeroes": true, 00:10:57.562 "zcopy": false, 00:10:57.562 "get_zone_info": false, 00:10:57.562 "zone_management": false, 00:10:57.562 "zone_append": false, 00:10:57.562 "compare": false, 00:10:57.562 "compare_and_write": false, 00:10:57.562 "abort": false, 00:10:57.563 "seek_hole": false, 00:10:57.563 "seek_data": false, 00:10:57.563 "copy": false, 00:10:57.563 "nvme_iov_md": false 00:10:57.563 }, 
00:10:57.563 "memory_domains": [ 00:10:57.563 { 00:10:57.563 "dma_device_id": "system", 00:10:57.563 "dma_device_type": 1 00:10:57.563 }, 00:10:57.563 { 00:10:57.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.563 "dma_device_type": 2 00:10:57.563 }, 00:10:57.563 { 00:10:57.563 "dma_device_id": "system", 00:10:57.563 "dma_device_type": 1 00:10:57.563 }, 00:10:57.563 { 00:10:57.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.563 "dma_device_type": 2 00:10:57.563 }, 00:10:57.563 { 00:10:57.563 "dma_device_id": "system", 00:10:57.563 "dma_device_type": 1 00:10:57.563 }, 00:10:57.563 { 00:10:57.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.563 "dma_device_type": 2 00:10:57.563 }, 00:10:57.563 { 00:10:57.563 "dma_device_id": "system", 00:10:57.563 "dma_device_type": 1 00:10:57.563 }, 00:10:57.563 { 00:10:57.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.563 "dma_device_type": 2 00:10:57.563 } 00:10:57.563 ], 00:10:57.563 "driver_specific": { 00:10:57.563 "raid": { 00:10:57.563 "uuid": "3c5e1214-507e-4d51-bd35-bb8713a4a9b4", 00:10:57.563 "strip_size_kb": 64, 00:10:57.563 "state": "online", 00:10:57.563 "raid_level": "concat", 00:10:57.563 "superblock": false, 00:10:57.563 "num_base_bdevs": 4, 00:10:57.563 "num_base_bdevs_discovered": 4, 00:10:57.563 "num_base_bdevs_operational": 4, 00:10:57.563 "base_bdevs_list": [ 00:10:57.563 { 00:10:57.563 "name": "NewBaseBdev", 00:10:57.563 "uuid": "4702af0f-3183-4b21-9550-2242c09d44bb", 00:10:57.563 "is_configured": true, 00:10:57.563 "data_offset": 0, 00:10:57.563 "data_size": 65536 00:10:57.563 }, 00:10:57.563 { 00:10:57.563 "name": "BaseBdev2", 00:10:57.563 "uuid": "6db154ee-0d00-404e-a9f9-4d044bdae537", 00:10:57.563 "is_configured": true, 00:10:57.563 "data_offset": 0, 00:10:57.563 "data_size": 65536 00:10:57.563 }, 00:10:57.563 { 00:10:57.563 "name": "BaseBdev3", 00:10:57.563 "uuid": "4af10eda-d371-440c-ba9f-f0bf64c5eca9", 00:10:57.563 "is_configured": true, 00:10:57.563 "data_offset": 0, 
00:10:57.563 "data_size": 65536 00:10:57.563 }, 00:10:57.563 { 00:10:57.563 "name": "BaseBdev4", 00:10:57.563 "uuid": "d595a29b-b632-44d8-9342-bd2b0db8ea23", 00:10:57.563 "is_configured": true, 00:10:57.563 "data_offset": 0, 00:10:57.563 "data_size": 65536 00:10:57.563 } 00:10:57.563 ] 00:10:57.563 } 00:10:57.563 } 00:10:57.563 }' 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:57.563 BaseBdev2 00:10:57.563 BaseBdev3 00:10:57.563 BaseBdev4' 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.563 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.823 [2024-09-28 08:48:35.596748] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.823 [2024-09-28 08:48:35.596778] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.823 [2024-09-28 08:48:35.596862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.823 [2024-09-28 08:48:35.596934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.823 [2024-09-28 08:48:35.596945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71285 00:10:57.823 08:48:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 71285 ']' 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71285 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71285 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71285' 00:10:57.823 killing process with pid 71285 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71285 00:10:57.823 [2024-09-28 08:48:35.646405] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.823 08:48:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71285 00:10:58.084 [2024-09-28 08:48:36.060982] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:59.548 00:10:59.548 real 0m11.749s 00:10:59.548 user 0m18.282s 00:10:59.548 sys 0m2.186s 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.548 ************************************ 00:10:59.548 END TEST raid_state_function_test 00:10:59.548 ************************************ 00:10:59.548 08:48:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:59.548 08:48:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:59.548 08:48:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.548 08:48:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:59.548 ************************************ 00:10:59.548 START TEST raid_state_function_test_sb 00:10:59.548 ************************************ 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:59.548 Process raid pid: 71961 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=71961 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71961' 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71961 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 71961 ']' 00:10:59.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:59.548 08:48:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.806 [2024-09-28 08:48:37.567095] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:59.806 [2024-09-28 08:48:37.567372] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.806 [2024-09-28 08:48:37.738471] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.064 [2024-09-28 08:48:37.986657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.323 [2024-09-28 08:48:38.215844] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.323 [2024-09-28 08:48:38.215879] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.580 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:00.580 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:00.580 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:00.580 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.580 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.580 [2024-09-28 08:48:38.388590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:00.580 [2024-09-28 08:48:38.388647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:00.580 [2024-09-28 08:48:38.388669] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:00.580 [2024-09-28 08:48:38.388694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:00.580 [2024-09-28 08:48:38.388701] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:00.580 [2024-09-28 08:48:38.388710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:00.580 [2024-09-28 08:48:38.388716] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:00.580 [2024-09-28 08:48:38.388726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:00.580 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.580 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:00.580 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.580 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.580 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.580 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.580 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.580 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.580 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.580 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.580 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.581 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.581 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.581 
08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.581 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.581 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.581 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.581 "name": "Existed_Raid", 00:11:00.581 "uuid": "ee353aeb-6b38-4d41-900a-d766d654b2d5", 00:11:00.581 "strip_size_kb": 64, 00:11:00.581 "state": "configuring", 00:11:00.581 "raid_level": "concat", 00:11:00.581 "superblock": true, 00:11:00.581 "num_base_bdevs": 4, 00:11:00.581 "num_base_bdevs_discovered": 0, 00:11:00.581 "num_base_bdevs_operational": 4, 00:11:00.581 "base_bdevs_list": [ 00:11:00.581 { 00:11:00.581 "name": "BaseBdev1", 00:11:00.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.581 "is_configured": false, 00:11:00.581 "data_offset": 0, 00:11:00.581 "data_size": 0 00:11:00.581 }, 00:11:00.581 { 00:11:00.581 "name": "BaseBdev2", 00:11:00.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.581 "is_configured": false, 00:11:00.581 "data_offset": 0, 00:11:00.581 "data_size": 0 00:11:00.581 }, 00:11:00.581 { 00:11:00.581 "name": "BaseBdev3", 00:11:00.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.581 "is_configured": false, 00:11:00.581 "data_offset": 0, 00:11:00.581 "data_size": 0 00:11:00.581 }, 00:11:00.581 { 00:11:00.581 "name": "BaseBdev4", 00:11:00.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.581 "is_configured": false, 00:11:00.581 "data_offset": 0, 00:11:00.581 "data_size": 0 00:11:00.581 } 00:11:00.581 ] 00:11:00.581 }' 00:11:00.581 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.581 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.839 08:48:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:00.839 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.839 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.839 [2024-09-28 08:48:38.807769] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:00.839 [2024-09-28 08:48:38.807875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:00.839 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.839 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:00.839 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.839 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.839 [2024-09-28 08:48:38.815799] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:00.840 [2024-09-28 08:48:38.815880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:00.840 [2024-09-28 08:48:38.815908] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:00.840 [2024-09-28 08:48:38.815932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:00.840 [2024-09-28 08:48:38.815951] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:00.840 [2024-09-28 08:48:38.815972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:00.840 [2024-09-28 08:48:38.815991] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:00.840 [2024-09-28 08:48:38.816045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:00.840 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.840 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:00.840 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.840 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.099 [2024-09-28 08:48:38.899860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.099 BaseBdev1 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.099 [ 00:11:01.099 { 00:11:01.099 "name": "BaseBdev1", 00:11:01.099 "aliases": [ 00:11:01.099 "afde7516-3646-4d5b-ae43-f872f967406e" 00:11:01.099 ], 00:11:01.099 "product_name": "Malloc disk", 00:11:01.099 "block_size": 512, 00:11:01.099 "num_blocks": 65536, 00:11:01.099 "uuid": "afde7516-3646-4d5b-ae43-f872f967406e", 00:11:01.099 "assigned_rate_limits": { 00:11:01.099 "rw_ios_per_sec": 0, 00:11:01.099 "rw_mbytes_per_sec": 0, 00:11:01.099 "r_mbytes_per_sec": 0, 00:11:01.099 "w_mbytes_per_sec": 0 00:11:01.099 }, 00:11:01.099 "claimed": true, 00:11:01.099 "claim_type": "exclusive_write", 00:11:01.099 "zoned": false, 00:11:01.099 "supported_io_types": { 00:11:01.099 "read": true, 00:11:01.099 "write": true, 00:11:01.099 "unmap": true, 00:11:01.099 "flush": true, 00:11:01.099 "reset": true, 00:11:01.099 "nvme_admin": false, 00:11:01.099 "nvme_io": false, 00:11:01.099 "nvme_io_md": false, 00:11:01.099 "write_zeroes": true, 00:11:01.099 "zcopy": true, 00:11:01.099 "get_zone_info": false, 00:11:01.099 "zone_management": false, 00:11:01.099 "zone_append": false, 00:11:01.099 "compare": false, 00:11:01.099 "compare_and_write": false, 00:11:01.099 "abort": true, 00:11:01.099 "seek_hole": false, 00:11:01.099 "seek_data": false, 00:11:01.099 "copy": true, 00:11:01.099 "nvme_iov_md": false 00:11:01.099 }, 00:11:01.099 "memory_domains": [ 00:11:01.099 { 00:11:01.099 "dma_device_id": "system", 00:11:01.099 "dma_device_type": 1 00:11:01.099 }, 00:11:01.099 { 00:11:01.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.099 "dma_device_type": 2 00:11:01.099 } 
00:11:01.099 ], 00:11:01.099 "driver_specific": {} 00:11:01.099 } 00:11:01.099 ] 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.099 08:48:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.099 "name": "Existed_Raid", 00:11:01.099 "uuid": "78028950-4a93-4a0b-9f65-8e282a77dd07", 00:11:01.099 "strip_size_kb": 64, 00:11:01.099 "state": "configuring", 00:11:01.099 "raid_level": "concat", 00:11:01.099 "superblock": true, 00:11:01.099 "num_base_bdevs": 4, 00:11:01.099 "num_base_bdevs_discovered": 1, 00:11:01.099 "num_base_bdevs_operational": 4, 00:11:01.099 "base_bdevs_list": [ 00:11:01.099 { 00:11:01.099 "name": "BaseBdev1", 00:11:01.099 "uuid": "afde7516-3646-4d5b-ae43-f872f967406e", 00:11:01.099 "is_configured": true, 00:11:01.099 "data_offset": 2048, 00:11:01.099 "data_size": 63488 00:11:01.099 }, 00:11:01.099 { 00:11:01.099 "name": "BaseBdev2", 00:11:01.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.099 "is_configured": false, 00:11:01.099 "data_offset": 0, 00:11:01.099 "data_size": 0 00:11:01.099 }, 00:11:01.099 { 00:11:01.099 "name": "BaseBdev3", 00:11:01.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.099 "is_configured": false, 00:11:01.099 "data_offset": 0, 00:11:01.099 "data_size": 0 00:11:01.099 }, 00:11:01.099 { 00:11:01.099 "name": "BaseBdev4", 00:11:01.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.099 "is_configured": false, 00:11:01.099 "data_offset": 0, 00:11:01.099 "data_size": 0 00:11:01.099 } 00:11:01.099 ] 00:11:01.099 }' 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.099 08:48:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.668 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:01.668 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.668 08:48:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.668 [2024-09-28 08:48:39.367098] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:01.668 [2024-09-28 08:48:39.367215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:01.668 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.668 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:01.668 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.668 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.668 [2024-09-28 08:48:39.375165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.668 [2024-09-28 08:48:39.377303] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.669 [2024-09-28 08:48:39.377398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.669 [2024-09-28 08:48:39.377413] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:01.669 [2024-09-28 08:48:39.377425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:01.669 [2024-09-28 08:48:39.377432] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:01.669 [2024-09-28 08:48:39.377441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:01.669 "name": "Existed_Raid", 00:11:01.669 "uuid": "73014b7f-a381-4313-8c93-09c25dc035c0", 00:11:01.669 "strip_size_kb": 64, 00:11:01.669 "state": "configuring", 00:11:01.669 "raid_level": "concat", 00:11:01.669 "superblock": true, 00:11:01.669 "num_base_bdevs": 4, 00:11:01.669 "num_base_bdevs_discovered": 1, 00:11:01.669 "num_base_bdevs_operational": 4, 00:11:01.669 "base_bdevs_list": [ 00:11:01.669 { 00:11:01.669 "name": "BaseBdev1", 00:11:01.669 "uuid": "afde7516-3646-4d5b-ae43-f872f967406e", 00:11:01.669 "is_configured": true, 00:11:01.669 "data_offset": 2048, 00:11:01.669 "data_size": 63488 00:11:01.669 }, 00:11:01.669 { 00:11:01.669 "name": "BaseBdev2", 00:11:01.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.669 "is_configured": false, 00:11:01.669 "data_offset": 0, 00:11:01.669 "data_size": 0 00:11:01.669 }, 00:11:01.669 { 00:11:01.669 "name": "BaseBdev3", 00:11:01.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.669 "is_configured": false, 00:11:01.669 "data_offset": 0, 00:11:01.669 "data_size": 0 00:11:01.669 }, 00:11:01.669 { 00:11:01.669 "name": "BaseBdev4", 00:11:01.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.669 "is_configured": false, 00:11:01.669 "data_offset": 0, 00:11:01.669 "data_size": 0 00:11:01.669 } 00:11:01.669 ] 00:11:01.669 }' 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.669 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.928 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:01.928 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.928 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.928 [2024-09-28 08:48:39.834530] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:01.928 BaseBdev2 00:11:01.928 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.928 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:01.928 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:01.928 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:01.928 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:01.928 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:01.928 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:01.928 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:01.928 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.928 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.928 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.928 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:01.928 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.929 [ 00:11:01.929 { 00:11:01.929 "name": "BaseBdev2", 00:11:01.929 "aliases": [ 00:11:01.929 "99cf5ca0-32ec-4981-8d32-0d7ebefd6cb5" 00:11:01.929 ], 00:11:01.929 "product_name": "Malloc disk", 00:11:01.929 "block_size": 512, 00:11:01.929 "num_blocks": 65536, 00:11:01.929 "uuid": "99cf5ca0-32ec-4981-8d32-0d7ebefd6cb5", 
00:11:01.929 "assigned_rate_limits": { 00:11:01.929 "rw_ios_per_sec": 0, 00:11:01.929 "rw_mbytes_per_sec": 0, 00:11:01.929 "r_mbytes_per_sec": 0, 00:11:01.929 "w_mbytes_per_sec": 0 00:11:01.929 }, 00:11:01.929 "claimed": true, 00:11:01.929 "claim_type": "exclusive_write", 00:11:01.929 "zoned": false, 00:11:01.929 "supported_io_types": { 00:11:01.929 "read": true, 00:11:01.929 "write": true, 00:11:01.929 "unmap": true, 00:11:01.929 "flush": true, 00:11:01.929 "reset": true, 00:11:01.929 "nvme_admin": false, 00:11:01.929 "nvme_io": false, 00:11:01.929 "nvme_io_md": false, 00:11:01.929 "write_zeroes": true, 00:11:01.929 "zcopy": true, 00:11:01.929 "get_zone_info": false, 00:11:01.929 "zone_management": false, 00:11:01.929 "zone_append": false, 00:11:01.929 "compare": false, 00:11:01.929 "compare_and_write": false, 00:11:01.929 "abort": true, 00:11:01.929 "seek_hole": false, 00:11:01.929 "seek_data": false, 00:11:01.929 "copy": true, 00:11:01.929 "nvme_iov_md": false 00:11:01.929 }, 00:11:01.929 "memory_domains": [ 00:11:01.929 { 00:11:01.929 "dma_device_id": "system", 00:11:01.929 "dma_device_type": 1 00:11:01.929 }, 00:11:01.929 { 00:11:01.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.929 "dma_device_type": 2 00:11:01.929 } 00:11:01.929 ], 00:11:01.929 "driver_specific": {} 00:11:01.929 } 00:11:01.929 ] 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.929 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.188 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.188 "name": "Existed_Raid", 00:11:02.188 "uuid": "73014b7f-a381-4313-8c93-09c25dc035c0", 00:11:02.188 "strip_size_kb": 64, 00:11:02.188 "state": "configuring", 00:11:02.188 "raid_level": "concat", 00:11:02.188 "superblock": true, 00:11:02.188 "num_base_bdevs": 4, 00:11:02.188 "num_base_bdevs_discovered": 2, 00:11:02.188 
"num_base_bdevs_operational": 4, 00:11:02.188 "base_bdevs_list": [ 00:11:02.188 { 00:11:02.188 "name": "BaseBdev1", 00:11:02.188 "uuid": "afde7516-3646-4d5b-ae43-f872f967406e", 00:11:02.188 "is_configured": true, 00:11:02.188 "data_offset": 2048, 00:11:02.188 "data_size": 63488 00:11:02.188 }, 00:11:02.188 { 00:11:02.188 "name": "BaseBdev2", 00:11:02.188 "uuid": "99cf5ca0-32ec-4981-8d32-0d7ebefd6cb5", 00:11:02.188 "is_configured": true, 00:11:02.188 "data_offset": 2048, 00:11:02.188 "data_size": 63488 00:11:02.188 }, 00:11:02.188 { 00:11:02.188 "name": "BaseBdev3", 00:11:02.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.188 "is_configured": false, 00:11:02.188 "data_offset": 0, 00:11:02.188 "data_size": 0 00:11:02.188 }, 00:11:02.188 { 00:11:02.188 "name": "BaseBdev4", 00:11:02.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.188 "is_configured": false, 00:11:02.188 "data_offset": 0, 00:11:02.188 "data_size": 0 00:11:02.188 } 00:11:02.188 ] 00:11:02.188 }' 00:11:02.188 08:48:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.188 08:48:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.448 [2024-09-28 08:48:40.333783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:02.448 BaseBdev3 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.448 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.448 [ 00:11:02.448 { 00:11:02.448 "name": "BaseBdev3", 00:11:02.448 "aliases": [ 00:11:02.448 "f2286d40-f08b-415e-87d5-4b9fb576ff37" 00:11:02.448 ], 00:11:02.448 "product_name": "Malloc disk", 00:11:02.448 "block_size": 512, 00:11:02.448 "num_blocks": 65536, 00:11:02.448 "uuid": "f2286d40-f08b-415e-87d5-4b9fb576ff37", 00:11:02.448 "assigned_rate_limits": { 00:11:02.448 "rw_ios_per_sec": 0, 00:11:02.448 "rw_mbytes_per_sec": 0, 00:11:02.448 "r_mbytes_per_sec": 0, 00:11:02.448 "w_mbytes_per_sec": 0 00:11:02.448 }, 00:11:02.448 "claimed": true, 00:11:02.448 "claim_type": "exclusive_write", 00:11:02.448 "zoned": false, 00:11:02.448 "supported_io_types": { 
00:11:02.448 "read": true, 00:11:02.449 "write": true, 00:11:02.449 "unmap": true, 00:11:02.449 "flush": true, 00:11:02.449 "reset": true, 00:11:02.449 "nvme_admin": false, 00:11:02.449 "nvme_io": false, 00:11:02.449 "nvme_io_md": false, 00:11:02.449 "write_zeroes": true, 00:11:02.449 "zcopy": true, 00:11:02.449 "get_zone_info": false, 00:11:02.449 "zone_management": false, 00:11:02.449 "zone_append": false, 00:11:02.449 "compare": false, 00:11:02.449 "compare_and_write": false, 00:11:02.449 "abort": true, 00:11:02.449 "seek_hole": false, 00:11:02.449 "seek_data": false, 00:11:02.449 "copy": true, 00:11:02.449 "nvme_iov_md": false 00:11:02.449 }, 00:11:02.449 "memory_domains": [ 00:11:02.449 { 00:11:02.449 "dma_device_id": "system", 00:11:02.449 "dma_device_type": 1 00:11:02.449 }, 00:11:02.449 { 00:11:02.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.449 "dma_device_type": 2 00:11:02.449 } 00:11:02.449 ], 00:11:02.449 "driver_specific": {} 00:11:02.449 } 00:11:02.449 ] 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.449 "name": "Existed_Raid", 00:11:02.449 "uuid": "73014b7f-a381-4313-8c93-09c25dc035c0", 00:11:02.449 "strip_size_kb": 64, 00:11:02.449 "state": "configuring", 00:11:02.449 "raid_level": "concat", 00:11:02.449 "superblock": true, 00:11:02.449 "num_base_bdevs": 4, 00:11:02.449 "num_base_bdevs_discovered": 3, 00:11:02.449 "num_base_bdevs_operational": 4, 00:11:02.449 "base_bdevs_list": [ 00:11:02.449 { 00:11:02.449 "name": "BaseBdev1", 00:11:02.449 "uuid": "afde7516-3646-4d5b-ae43-f872f967406e", 00:11:02.449 "is_configured": true, 00:11:02.449 "data_offset": 2048, 00:11:02.449 "data_size": 63488 00:11:02.449 }, 00:11:02.449 { 00:11:02.449 "name": "BaseBdev2", 00:11:02.449 
"uuid": "99cf5ca0-32ec-4981-8d32-0d7ebefd6cb5", 00:11:02.449 "is_configured": true, 00:11:02.449 "data_offset": 2048, 00:11:02.449 "data_size": 63488 00:11:02.449 }, 00:11:02.449 { 00:11:02.449 "name": "BaseBdev3", 00:11:02.449 "uuid": "f2286d40-f08b-415e-87d5-4b9fb576ff37", 00:11:02.449 "is_configured": true, 00:11:02.449 "data_offset": 2048, 00:11:02.449 "data_size": 63488 00:11:02.449 }, 00:11:02.449 { 00:11:02.449 "name": "BaseBdev4", 00:11:02.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.449 "is_configured": false, 00:11:02.449 "data_offset": 0, 00:11:02.449 "data_size": 0 00:11:02.449 } 00:11:02.449 ] 00:11:02.449 }' 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.449 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.016 [2024-09-28 08:48:40.873284] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:03.016 [2024-09-28 08:48:40.873563] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:03.016 [2024-09-28 08:48:40.873584] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:03.016 [2024-09-28 08:48:40.874017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:03.016 [2024-09-28 08:48:40.874235] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:03.016 [2024-09-28 08:48:40.874287] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:11:03.016 [2024-09-28 08:48:40.874488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.016 BaseBdev4 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.016 [ 00:11:03.016 { 00:11:03.016 "name": "BaseBdev4", 00:11:03.016 "aliases": [ 00:11:03.016 "f239baf6-466a-473c-b02e-92756fd80678" 00:11:03.016 ], 00:11:03.016 "product_name": "Malloc disk", 00:11:03.016 "block_size": 512, 
00:11:03.016 "num_blocks": 65536, 00:11:03.016 "uuid": "f239baf6-466a-473c-b02e-92756fd80678", 00:11:03.016 "assigned_rate_limits": { 00:11:03.016 "rw_ios_per_sec": 0, 00:11:03.016 "rw_mbytes_per_sec": 0, 00:11:03.016 "r_mbytes_per_sec": 0, 00:11:03.016 "w_mbytes_per_sec": 0 00:11:03.016 }, 00:11:03.016 "claimed": true, 00:11:03.016 "claim_type": "exclusive_write", 00:11:03.016 "zoned": false, 00:11:03.016 "supported_io_types": { 00:11:03.016 "read": true, 00:11:03.016 "write": true, 00:11:03.016 "unmap": true, 00:11:03.016 "flush": true, 00:11:03.016 "reset": true, 00:11:03.016 "nvme_admin": false, 00:11:03.016 "nvme_io": false, 00:11:03.016 "nvme_io_md": false, 00:11:03.016 "write_zeroes": true, 00:11:03.016 "zcopy": true, 00:11:03.016 "get_zone_info": false, 00:11:03.016 "zone_management": false, 00:11:03.016 "zone_append": false, 00:11:03.016 "compare": false, 00:11:03.016 "compare_and_write": false, 00:11:03.016 "abort": true, 00:11:03.016 "seek_hole": false, 00:11:03.016 "seek_data": false, 00:11:03.016 "copy": true, 00:11:03.016 "nvme_iov_md": false 00:11:03.016 }, 00:11:03.016 "memory_domains": [ 00:11:03.016 { 00:11:03.016 "dma_device_id": "system", 00:11:03.016 "dma_device_type": 1 00:11:03.016 }, 00:11:03.016 { 00:11:03.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.016 "dma_device_type": 2 00:11:03.016 } 00:11:03.016 ], 00:11:03.016 "driver_specific": {} 00:11:03.016 } 00:11:03.016 ] 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.016 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.016 "name": "Existed_Raid", 00:11:03.016 "uuid": "73014b7f-a381-4313-8c93-09c25dc035c0", 00:11:03.016 "strip_size_kb": 64, 00:11:03.016 "state": "online", 00:11:03.017 "raid_level": "concat", 00:11:03.017 "superblock": true, 00:11:03.017 "num_base_bdevs": 
4, 00:11:03.017 "num_base_bdevs_discovered": 4, 00:11:03.017 "num_base_bdevs_operational": 4, 00:11:03.017 "base_bdevs_list": [ 00:11:03.017 { 00:11:03.017 "name": "BaseBdev1", 00:11:03.017 "uuid": "afde7516-3646-4d5b-ae43-f872f967406e", 00:11:03.017 "is_configured": true, 00:11:03.017 "data_offset": 2048, 00:11:03.017 "data_size": 63488 00:11:03.017 }, 00:11:03.017 { 00:11:03.017 "name": "BaseBdev2", 00:11:03.017 "uuid": "99cf5ca0-32ec-4981-8d32-0d7ebefd6cb5", 00:11:03.017 "is_configured": true, 00:11:03.017 "data_offset": 2048, 00:11:03.017 "data_size": 63488 00:11:03.017 }, 00:11:03.017 { 00:11:03.017 "name": "BaseBdev3", 00:11:03.017 "uuid": "f2286d40-f08b-415e-87d5-4b9fb576ff37", 00:11:03.017 "is_configured": true, 00:11:03.017 "data_offset": 2048, 00:11:03.017 "data_size": 63488 00:11:03.017 }, 00:11:03.017 { 00:11:03.017 "name": "BaseBdev4", 00:11:03.017 "uuid": "f239baf6-466a-473c-b02e-92756fd80678", 00:11:03.017 "is_configured": true, 00:11:03.017 "data_offset": 2048, 00:11:03.017 "data_size": 63488 00:11:03.017 } 00:11:03.017 ] 00:11:03.017 }' 00:11:03.017 08:48:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.017 08:48:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.585 
08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.585 [2024-09-28 08:48:41.412726] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.585 "name": "Existed_Raid", 00:11:03.585 "aliases": [ 00:11:03.585 "73014b7f-a381-4313-8c93-09c25dc035c0" 00:11:03.585 ], 00:11:03.585 "product_name": "Raid Volume", 00:11:03.585 "block_size": 512, 00:11:03.585 "num_blocks": 253952, 00:11:03.585 "uuid": "73014b7f-a381-4313-8c93-09c25dc035c0", 00:11:03.585 "assigned_rate_limits": { 00:11:03.585 "rw_ios_per_sec": 0, 00:11:03.585 "rw_mbytes_per_sec": 0, 00:11:03.585 "r_mbytes_per_sec": 0, 00:11:03.585 "w_mbytes_per_sec": 0 00:11:03.585 }, 00:11:03.585 "claimed": false, 00:11:03.585 "zoned": false, 00:11:03.585 "supported_io_types": { 00:11:03.585 "read": true, 00:11:03.585 "write": true, 00:11:03.585 "unmap": true, 00:11:03.585 "flush": true, 00:11:03.585 "reset": true, 00:11:03.585 "nvme_admin": false, 00:11:03.585 "nvme_io": false, 00:11:03.585 "nvme_io_md": false, 00:11:03.585 "write_zeroes": true, 00:11:03.585 "zcopy": false, 00:11:03.585 "get_zone_info": false, 00:11:03.585 "zone_management": false, 00:11:03.585 "zone_append": false, 00:11:03.585 "compare": false, 00:11:03.585 "compare_and_write": false, 00:11:03.585 "abort": false, 00:11:03.585 "seek_hole": false, 00:11:03.585 "seek_data": false, 00:11:03.585 "copy": false, 00:11:03.585 
"nvme_iov_md": false 00:11:03.585 }, 00:11:03.585 "memory_domains": [ 00:11:03.585 { 00:11:03.585 "dma_device_id": "system", 00:11:03.585 "dma_device_type": 1 00:11:03.585 }, 00:11:03.585 { 00:11:03.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.585 "dma_device_type": 2 00:11:03.585 }, 00:11:03.585 { 00:11:03.585 "dma_device_id": "system", 00:11:03.585 "dma_device_type": 1 00:11:03.585 }, 00:11:03.585 { 00:11:03.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.585 "dma_device_type": 2 00:11:03.585 }, 00:11:03.585 { 00:11:03.585 "dma_device_id": "system", 00:11:03.585 "dma_device_type": 1 00:11:03.585 }, 00:11:03.585 { 00:11:03.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.585 "dma_device_type": 2 00:11:03.585 }, 00:11:03.585 { 00:11:03.585 "dma_device_id": "system", 00:11:03.585 "dma_device_type": 1 00:11:03.585 }, 00:11:03.585 { 00:11:03.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.585 "dma_device_type": 2 00:11:03.585 } 00:11:03.585 ], 00:11:03.585 "driver_specific": { 00:11:03.585 "raid": { 00:11:03.585 "uuid": "73014b7f-a381-4313-8c93-09c25dc035c0", 00:11:03.585 "strip_size_kb": 64, 00:11:03.585 "state": "online", 00:11:03.585 "raid_level": "concat", 00:11:03.585 "superblock": true, 00:11:03.585 "num_base_bdevs": 4, 00:11:03.585 "num_base_bdevs_discovered": 4, 00:11:03.585 "num_base_bdevs_operational": 4, 00:11:03.585 "base_bdevs_list": [ 00:11:03.585 { 00:11:03.585 "name": "BaseBdev1", 00:11:03.585 "uuid": "afde7516-3646-4d5b-ae43-f872f967406e", 00:11:03.585 "is_configured": true, 00:11:03.585 "data_offset": 2048, 00:11:03.585 "data_size": 63488 00:11:03.585 }, 00:11:03.585 { 00:11:03.585 "name": "BaseBdev2", 00:11:03.585 "uuid": "99cf5ca0-32ec-4981-8d32-0d7ebefd6cb5", 00:11:03.585 "is_configured": true, 00:11:03.585 "data_offset": 2048, 00:11:03.585 "data_size": 63488 00:11:03.585 }, 00:11:03.585 { 00:11:03.585 "name": "BaseBdev3", 00:11:03.585 "uuid": "f2286d40-f08b-415e-87d5-4b9fb576ff37", 00:11:03.585 "is_configured": true, 
00:11:03.585 "data_offset": 2048, 00:11:03.585 "data_size": 63488 00:11:03.585 }, 00:11:03.585 { 00:11:03.585 "name": "BaseBdev4", 00:11:03.585 "uuid": "f239baf6-466a-473c-b02e-92756fd80678", 00:11:03.585 "is_configured": true, 00:11:03.585 "data_offset": 2048, 00:11:03.585 "data_size": 63488 00:11:03.585 } 00:11:03.585 ] 00:11:03.585 } 00:11:03.585 } 00:11:03.585 }' 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:03.585 BaseBdev2 00:11:03.585 BaseBdev3 00:11:03.585 BaseBdev4' 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.585 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.586 08:48:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.845 [2024-09-28 08:48:41.711898] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:03.845 [2024-09-28 08:48:41.711931] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.845 [2024-09-28 08:48:41.711981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.845 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.104 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:04.104 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.104 "name": "Existed_Raid", 00:11:04.104 "uuid": "73014b7f-a381-4313-8c93-09c25dc035c0", 00:11:04.104 "strip_size_kb": 64, 00:11:04.104 "state": "offline", 00:11:04.104 "raid_level": "concat", 00:11:04.104 "superblock": true, 00:11:04.105 "num_base_bdevs": 4, 00:11:04.105 "num_base_bdevs_discovered": 3, 00:11:04.105 "num_base_bdevs_operational": 3, 00:11:04.105 "base_bdevs_list": [ 00:11:04.105 { 00:11:04.105 "name": null, 00:11:04.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.105 "is_configured": false, 00:11:04.105 "data_offset": 0, 00:11:04.105 "data_size": 63488 00:11:04.105 }, 00:11:04.105 { 00:11:04.105 "name": "BaseBdev2", 00:11:04.105 "uuid": "99cf5ca0-32ec-4981-8d32-0d7ebefd6cb5", 00:11:04.105 "is_configured": true, 00:11:04.105 "data_offset": 2048, 00:11:04.105 "data_size": 63488 00:11:04.105 }, 00:11:04.105 { 00:11:04.105 "name": "BaseBdev3", 00:11:04.105 "uuid": "f2286d40-f08b-415e-87d5-4b9fb576ff37", 00:11:04.105 "is_configured": true, 00:11:04.105 "data_offset": 2048, 00:11:04.105 "data_size": 63488 00:11:04.105 }, 00:11:04.105 { 00:11:04.105 "name": "BaseBdev4", 00:11:04.105 "uuid": "f239baf6-466a-473c-b02e-92756fd80678", 00:11:04.105 "is_configured": true, 00:11:04.105 "data_offset": 2048, 00:11:04.105 "data_size": 63488 00:11:04.105 } 00:11:04.105 ] 00:11:04.105 }' 00:11:04.105 08:48:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.105 08:48:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.363 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:04.363 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:04.363 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.363 
08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:04.363 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.363 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.363 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.363 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:04.363 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:04.363 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:04.363 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.363 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.363 [2024-09-28 08:48:42.337906] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.622 [2024-09-28 08:48:42.492119] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.622 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:04.881 08:48:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.881 [2024-09-28 08:48:42.636430] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:04.881 [2024-09-28 08:48:42.636486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.881 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.882 BaseBdev2 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.882 [ 00:11:04.882 { 00:11:04.882 "name": "BaseBdev2", 00:11:04.882 "aliases": [ 00:11:04.882 
"a60db030-b1ee-4aea-b270-fd5c112e8682" 00:11:04.882 ], 00:11:04.882 "product_name": "Malloc disk", 00:11:04.882 "block_size": 512, 00:11:04.882 "num_blocks": 65536, 00:11:04.882 "uuid": "a60db030-b1ee-4aea-b270-fd5c112e8682", 00:11:04.882 "assigned_rate_limits": { 00:11:04.882 "rw_ios_per_sec": 0, 00:11:04.882 "rw_mbytes_per_sec": 0, 00:11:04.882 "r_mbytes_per_sec": 0, 00:11:04.882 "w_mbytes_per_sec": 0 00:11:04.882 }, 00:11:04.882 "claimed": false, 00:11:04.882 "zoned": false, 00:11:04.882 "supported_io_types": { 00:11:04.882 "read": true, 00:11:04.882 "write": true, 00:11:04.882 "unmap": true, 00:11:04.882 "flush": true, 00:11:04.882 "reset": true, 00:11:04.882 "nvme_admin": false, 00:11:04.882 "nvme_io": false, 00:11:04.882 "nvme_io_md": false, 00:11:04.882 "write_zeroes": true, 00:11:04.882 "zcopy": true, 00:11:04.882 "get_zone_info": false, 00:11:04.882 "zone_management": false, 00:11:04.882 "zone_append": false, 00:11:04.882 "compare": false, 00:11:04.882 "compare_and_write": false, 00:11:04.882 "abort": true, 00:11:04.882 "seek_hole": false, 00:11:04.882 "seek_data": false, 00:11:04.882 "copy": true, 00:11:04.882 "nvme_iov_md": false 00:11:04.882 }, 00:11:04.882 "memory_domains": [ 00:11:04.882 { 00:11:04.882 "dma_device_id": "system", 00:11:04.882 "dma_device_type": 1 00:11:04.882 }, 00:11:04.882 { 00:11:04.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.882 "dma_device_type": 2 00:11:04.882 } 00:11:04.882 ], 00:11:04.882 "driver_specific": {} 00:11:04.882 } 00:11:04.882 ] 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:04.882 08:48:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.882 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.141 BaseBdev3 00:11:05.141 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.141 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:05.141 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:05.141 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:05.141 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:05.141 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:05.141 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:05.141 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:05.141 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.141 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.141 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.141 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:05.141 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.141 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.141 [ 00:11:05.141 { 
00:11:05.141 "name": "BaseBdev3", 00:11:05.141 "aliases": [ 00:11:05.141 "6485ca1f-1f42-4af1-aca2-d87bc5c95d96" 00:11:05.141 ], 00:11:05.141 "product_name": "Malloc disk", 00:11:05.141 "block_size": 512, 00:11:05.141 "num_blocks": 65536, 00:11:05.141 "uuid": "6485ca1f-1f42-4af1-aca2-d87bc5c95d96", 00:11:05.141 "assigned_rate_limits": { 00:11:05.142 "rw_ios_per_sec": 0, 00:11:05.142 "rw_mbytes_per_sec": 0, 00:11:05.142 "r_mbytes_per_sec": 0, 00:11:05.142 "w_mbytes_per_sec": 0 00:11:05.142 }, 00:11:05.142 "claimed": false, 00:11:05.142 "zoned": false, 00:11:05.142 "supported_io_types": { 00:11:05.142 "read": true, 00:11:05.142 "write": true, 00:11:05.142 "unmap": true, 00:11:05.142 "flush": true, 00:11:05.142 "reset": true, 00:11:05.142 "nvme_admin": false, 00:11:05.142 "nvme_io": false, 00:11:05.142 "nvme_io_md": false, 00:11:05.142 "write_zeroes": true, 00:11:05.142 "zcopy": true, 00:11:05.142 "get_zone_info": false, 00:11:05.142 "zone_management": false, 00:11:05.142 "zone_append": false, 00:11:05.142 "compare": false, 00:11:05.142 "compare_and_write": false, 00:11:05.142 "abort": true, 00:11:05.142 "seek_hole": false, 00:11:05.142 "seek_data": false, 00:11:05.142 "copy": true, 00:11:05.142 "nvme_iov_md": false 00:11:05.142 }, 00:11:05.142 "memory_domains": [ 00:11:05.142 { 00:11:05.142 "dma_device_id": "system", 00:11:05.142 "dma_device_type": 1 00:11:05.142 }, 00:11:05.142 { 00:11:05.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.142 "dma_device_type": 2 00:11:05.142 } 00:11:05.142 ], 00:11:05.142 "driver_specific": {} 00:11:05.142 } 00:11:05.142 ] 00:11:05.142 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.142 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:05.142 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:05.142 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:05.142 08:48:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:05.142 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.142 08:48:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.142 BaseBdev4 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:05.142 [ 00:11:05.142 { 00:11:05.142 "name": "BaseBdev4", 00:11:05.142 "aliases": [ 00:11:05.142 "05cc723c-c165-41fc-b10b-fe556c31ed28" 00:11:05.142 ], 00:11:05.142 "product_name": "Malloc disk", 00:11:05.142 "block_size": 512, 00:11:05.142 "num_blocks": 65536, 00:11:05.142 "uuid": "05cc723c-c165-41fc-b10b-fe556c31ed28", 00:11:05.142 "assigned_rate_limits": { 00:11:05.142 "rw_ios_per_sec": 0, 00:11:05.142 "rw_mbytes_per_sec": 0, 00:11:05.142 "r_mbytes_per_sec": 0, 00:11:05.142 "w_mbytes_per_sec": 0 00:11:05.142 }, 00:11:05.142 "claimed": false, 00:11:05.142 "zoned": false, 00:11:05.142 "supported_io_types": { 00:11:05.142 "read": true, 00:11:05.142 "write": true, 00:11:05.142 "unmap": true, 00:11:05.142 "flush": true, 00:11:05.142 "reset": true, 00:11:05.142 "nvme_admin": false, 00:11:05.142 "nvme_io": false, 00:11:05.142 "nvme_io_md": false, 00:11:05.142 "write_zeroes": true, 00:11:05.142 "zcopy": true, 00:11:05.142 "get_zone_info": false, 00:11:05.142 "zone_management": false, 00:11:05.142 "zone_append": false, 00:11:05.142 "compare": false, 00:11:05.142 "compare_and_write": false, 00:11:05.142 "abort": true, 00:11:05.142 "seek_hole": false, 00:11:05.142 "seek_data": false, 00:11:05.142 "copy": true, 00:11:05.142 "nvme_iov_md": false 00:11:05.142 }, 00:11:05.142 "memory_domains": [ 00:11:05.142 { 00:11:05.142 "dma_device_id": "system", 00:11:05.142 "dma_device_type": 1 00:11:05.142 }, 00:11:05.142 { 00:11:05.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.142 "dma_device_type": 2 00:11:05.142 } 00:11:05.142 ], 00:11:05.142 "driver_specific": {} 00:11:05.142 } 00:11:05.142 ] 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:05.142 08:48:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.142 [2024-09-28 08:48:43.046954] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:05.142 [2024-09-28 08:48:43.047056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:05.142 [2024-09-28 08:48:43.047103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.142 [2024-09-28 08:48:43.049141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:05.142 [2024-09-28 08:48:43.049249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.142 "name": "Existed_Raid", 00:11:05.142 "uuid": "25a3d42d-318f-45b8-9a6b-015c931b3d97", 00:11:05.142 "strip_size_kb": 64, 00:11:05.142 "state": "configuring", 00:11:05.142 "raid_level": "concat", 00:11:05.142 "superblock": true, 00:11:05.142 "num_base_bdevs": 4, 00:11:05.142 "num_base_bdevs_discovered": 3, 00:11:05.142 "num_base_bdevs_operational": 4, 00:11:05.142 "base_bdevs_list": [ 00:11:05.142 { 00:11:05.142 "name": "BaseBdev1", 00:11:05.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.142 "is_configured": false, 00:11:05.142 "data_offset": 0, 00:11:05.142 "data_size": 0 00:11:05.142 }, 00:11:05.142 { 00:11:05.142 "name": "BaseBdev2", 00:11:05.142 "uuid": "a60db030-b1ee-4aea-b270-fd5c112e8682", 00:11:05.142 "is_configured": true, 00:11:05.142 "data_offset": 2048, 00:11:05.142 "data_size": 63488 
00:11:05.142 }, 00:11:05.142 { 00:11:05.142 "name": "BaseBdev3", 00:11:05.142 "uuid": "6485ca1f-1f42-4af1-aca2-d87bc5c95d96", 00:11:05.142 "is_configured": true, 00:11:05.142 "data_offset": 2048, 00:11:05.142 "data_size": 63488 00:11:05.142 }, 00:11:05.142 { 00:11:05.142 "name": "BaseBdev4", 00:11:05.142 "uuid": "05cc723c-c165-41fc-b10b-fe556c31ed28", 00:11:05.142 "is_configured": true, 00:11:05.142 "data_offset": 2048, 00:11:05.142 "data_size": 63488 00:11:05.142 } 00:11:05.142 ] 00:11:05.142 }' 00:11:05.142 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.143 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.710 [2024-09-28 08:48:43.502143] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.710 "name": "Existed_Raid", 00:11:05.710 "uuid": "25a3d42d-318f-45b8-9a6b-015c931b3d97", 00:11:05.710 "strip_size_kb": 64, 00:11:05.710 "state": "configuring", 00:11:05.710 "raid_level": "concat", 00:11:05.710 "superblock": true, 00:11:05.710 "num_base_bdevs": 4, 00:11:05.710 "num_base_bdevs_discovered": 2, 00:11:05.710 "num_base_bdevs_operational": 4, 00:11:05.710 "base_bdevs_list": [ 00:11:05.710 { 00:11:05.710 "name": "BaseBdev1", 00:11:05.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.710 "is_configured": false, 00:11:05.710 "data_offset": 0, 00:11:05.710 "data_size": 0 00:11:05.710 }, 00:11:05.710 { 00:11:05.710 "name": null, 00:11:05.710 "uuid": "a60db030-b1ee-4aea-b270-fd5c112e8682", 00:11:05.710 "is_configured": false, 00:11:05.710 "data_offset": 0, 00:11:05.710 "data_size": 63488 
00:11:05.710 }, 00:11:05.710 { 00:11:05.710 "name": "BaseBdev3", 00:11:05.710 "uuid": "6485ca1f-1f42-4af1-aca2-d87bc5c95d96", 00:11:05.710 "is_configured": true, 00:11:05.710 "data_offset": 2048, 00:11:05.710 "data_size": 63488 00:11:05.710 }, 00:11:05.710 { 00:11:05.710 "name": "BaseBdev4", 00:11:05.710 "uuid": "05cc723c-c165-41fc-b10b-fe556c31ed28", 00:11:05.710 "is_configured": true, 00:11:05.710 "data_offset": 2048, 00:11:05.710 "data_size": 63488 00:11:05.710 } 00:11:05.710 ] 00:11:05.710 }' 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.710 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.279 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:06.279 08:48:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.279 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.279 08:48:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.279 [2024-09-28 08:48:44.075003] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.279 BaseBdev1 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.279 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.279 [ 00:11:06.279 { 00:11:06.279 "name": "BaseBdev1", 00:11:06.279 "aliases": [ 00:11:06.279 "17ee34c7-86af-4b23-886a-3c03736fb071" 00:11:06.279 ], 00:11:06.279 "product_name": "Malloc disk", 00:11:06.279 "block_size": 512, 00:11:06.279 "num_blocks": 65536, 00:11:06.279 "uuid": "17ee34c7-86af-4b23-886a-3c03736fb071", 00:11:06.279 "assigned_rate_limits": { 00:11:06.279 "rw_ios_per_sec": 0, 00:11:06.279 "rw_mbytes_per_sec": 0, 
00:11:06.279 "r_mbytes_per_sec": 0, 00:11:06.279 "w_mbytes_per_sec": 0 00:11:06.279 }, 00:11:06.279 "claimed": true, 00:11:06.279 "claim_type": "exclusive_write", 00:11:06.279 "zoned": false, 00:11:06.279 "supported_io_types": { 00:11:06.279 "read": true, 00:11:06.279 "write": true, 00:11:06.279 "unmap": true, 00:11:06.279 "flush": true, 00:11:06.279 "reset": true, 00:11:06.279 "nvme_admin": false, 00:11:06.279 "nvme_io": false, 00:11:06.279 "nvme_io_md": false, 00:11:06.279 "write_zeroes": true, 00:11:06.279 "zcopy": true, 00:11:06.280 "get_zone_info": false, 00:11:06.280 "zone_management": false, 00:11:06.280 "zone_append": false, 00:11:06.280 "compare": false, 00:11:06.280 "compare_and_write": false, 00:11:06.280 "abort": true, 00:11:06.280 "seek_hole": false, 00:11:06.280 "seek_data": false, 00:11:06.280 "copy": true, 00:11:06.280 "nvme_iov_md": false 00:11:06.280 }, 00:11:06.280 "memory_domains": [ 00:11:06.280 { 00:11:06.280 "dma_device_id": "system", 00:11:06.280 "dma_device_type": 1 00:11:06.280 }, 00:11:06.280 { 00:11:06.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.280 "dma_device_type": 2 00:11:06.280 } 00:11:06.280 ], 00:11:06.280 "driver_specific": {} 00:11:06.280 } 00:11:06.280 ] 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.280 08:48:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.280 "name": "Existed_Raid", 00:11:06.280 "uuid": "25a3d42d-318f-45b8-9a6b-015c931b3d97", 00:11:06.280 "strip_size_kb": 64, 00:11:06.280 "state": "configuring", 00:11:06.280 "raid_level": "concat", 00:11:06.280 "superblock": true, 00:11:06.280 "num_base_bdevs": 4, 00:11:06.280 "num_base_bdevs_discovered": 3, 00:11:06.280 "num_base_bdevs_operational": 4, 00:11:06.280 "base_bdevs_list": [ 00:11:06.280 { 00:11:06.280 "name": "BaseBdev1", 00:11:06.280 "uuid": "17ee34c7-86af-4b23-886a-3c03736fb071", 00:11:06.280 "is_configured": true, 00:11:06.280 "data_offset": 2048, 00:11:06.280 "data_size": 63488 00:11:06.280 }, 00:11:06.280 { 
00:11:06.280 "name": null, 00:11:06.280 "uuid": "a60db030-b1ee-4aea-b270-fd5c112e8682", 00:11:06.280 "is_configured": false, 00:11:06.280 "data_offset": 0, 00:11:06.280 "data_size": 63488 00:11:06.280 }, 00:11:06.280 { 00:11:06.280 "name": "BaseBdev3", 00:11:06.280 "uuid": "6485ca1f-1f42-4af1-aca2-d87bc5c95d96", 00:11:06.280 "is_configured": true, 00:11:06.280 "data_offset": 2048, 00:11:06.280 "data_size": 63488 00:11:06.280 }, 00:11:06.280 { 00:11:06.280 "name": "BaseBdev4", 00:11:06.280 "uuid": "05cc723c-c165-41fc-b10b-fe556c31ed28", 00:11:06.280 "is_configured": true, 00:11:06.280 "data_offset": 2048, 00:11:06.280 "data_size": 63488 00:11:06.280 } 00:11:06.280 ] 00:11:06.280 }' 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.280 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.847 [2024-09-28 08:48:44.594155] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.847 08:48:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.847 "name": "Existed_Raid", 00:11:06.847 "uuid": "25a3d42d-318f-45b8-9a6b-015c931b3d97", 00:11:06.847 "strip_size_kb": 64, 00:11:06.847 "state": "configuring", 00:11:06.847 "raid_level": "concat", 00:11:06.847 "superblock": true, 00:11:06.847 "num_base_bdevs": 4, 00:11:06.847 "num_base_bdevs_discovered": 2, 00:11:06.847 "num_base_bdevs_operational": 4, 00:11:06.847 "base_bdevs_list": [ 00:11:06.847 { 00:11:06.847 "name": "BaseBdev1", 00:11:06.847 "uuid": "17ee34c7-86af-4b23-886a-3c03736fb071", 00:11:06.847 "is_configured": true, 00:11:06.847 "data_offset": 2048, 00:11:06.847 "data_size": 63488 00:11:06.847 }, 00:11:06.847 { 00:11:06.847 "name": null, 00:11:06.847 "uuid": "a60db030-b1ee-4aea-b270-fd5c112e8682", 00:11:06.847 "is_configured": false, 00:11:06.847 "data_offset": 0, 00:11:06.847 "data_size": 63488 00:11:06.847 }, 00:11:06.847 { 00:11:06.847 "name": null, 00:11:06.847 "uuid": "6485ca1f-1f42-4af1-aca2-d87bc5c95d96", 00:11:06.847 "is_configured": false, 00:11:06.847 "data_offset": 0, 00:11:06.847 "data_size": 63488 00:11:06.847 }, 00:11:06.847 { 00:11:06.847 "name": "BaseBdev4", 00:11:06.847 "uuid": "05cc723c-c165-41fc-b10b-fe556c31ed28", 00:11:06.847 "is_configured": true, 00:11:06.847 "data_offset": 2048, 00:11:06.847 "data_size": 63488 00:11:06.847 } 00:11:06.847 ] 00:11:06.847 }' 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.847 08:48:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.106 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.106 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:07.106 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.106 
08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.106 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.106 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:07.106 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:07.106 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.106 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.106 [2024-09-28 08:48:45.097350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.365 "name": "Existed_Raid", 00:11:07.365 "uuid": "25a3d42d-318f-45b8-9a6b-015c931b3d97", 00:11:07.365 "strip_size_kb": 64, 00:11:07.365 "state": "configuring", 00:11:07.365 "raid_level": "concat", 00:11:07.365 "superblock": true, 00:11:07.365 "num_base_bdevs": 4, 00:11:07.365 "num_base_bdevs_discovered": 3, 00:11:07.365 "num_base_bdevs_operational": 4, 00:11:07.365 "base_bdevs_list": [ 00:11:07.365 { 00:11:07.365 "name": "BaseBdev1", 00:11:07.365 "uuid": "17ee34c7-86af-4b23-886a-3c03736fb071", 00:11:07.365 "is_configured": true, 00:11:07.365 "data_offset": 2048, 00:11:07.365 "data_size": 63488 00:11:07.365 }, 00:11:07.365 { 00:11:07.365 "name": null, 00:11:07.365 "uuid": "a60db030-b1ee-4aea-b270-fd5c112e8682", 00:11:07.365 "is_configured": false, 00:11:07.365 "data_offset": 0, 00:11:07.365 "data_size": 63488 00:11:07.365 }, 00:11:07.365 { 00:11:07.365 "name": "BaseBdev3", 00:11:07.365 "uuid": "6485ca1f-1f42-4af1-aca2-d87bc5c95d96", 00:11:07.365 "is_configured": true, 00:11:07.365 "data_offset": 2048, 00:11:07.365 "data_size": 63488 00:11:07.365 }, 00:11:07.365 { 00:11:07.365 "name": "BaseBdev4", 00:11:07.365 "uuid": 
"05cc723c-c165-41fc-b10b-fe556c31ed28", 00:11:07.365 "is_configured": true, 00:11:07.365 "data_offset": 2048, 00:11:07.365 "data_size": 63488 00:11:07.365 } 00:11:07.365 ] 00:11:07.365 }' 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.365 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.624 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:07.624 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.624 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.624 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.624 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.624 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:07.624 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:07.624 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.624 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.624 [2024-09-28 08:48:45.572524] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.882 "name": "Existed_Raid", 00:11:07.882 "uuid": "25a3d42d-318f-45b8-9a6b-015c931b3d97", 00:11:07.882 "strip_size_kb": 64, 00:11:07.882 "state": "configuring", 00:11:07.882 "raid_level": "concat", 00:11:07.882 "superblock": true, 00:11:07.882 "num_base_bdevs": 4, 00:11:07.882 "num_base_bdevs_discovered": 2, 00:11:07.882 "num_base_bdevs_operational": 4, 00:11:07.882 "base_bdevs_list": [ 00:11:07.882 { 00:11:07.882 "name": null, 00:11:07.882 
"uuid": "17ee34c7-86af-4b23-886a-3c03736fb071", 00:11:07.882 "is_configured": false, 00:11:07.882 "data_offset": 0, 00:11:07.882 "data_size": 63488 00:11:07.882 }, 00:11:07.882 { 00:11:07.882 "name": null, 00:11:07.882 "uuid": "a60db030-b1ee-4aea-b270-fd5c112e8682", 00:11:07.882 "is_configured": false, 00:11:07.882 "data_offset": 0, 00:11:07.882 "data_size": 63488 00:11:07.882 }, 00:11:07.882 { 00:11:07.882 "name": "BaseBdev3", 00:11:07.882 "uuid": "6485ca1f-1f42-4af1-aca2-d87bc5c95d96", 00:11:07.882 "is_configured": true, 00:11:07.882 "data_offset": 2048, 00:11:07.882 "data_size": 63488 00:11:07.882 }, 00:11:07.882 { 00:11:07.882 "name": "BaseBdev4", 00:11:07.882 "uuid": "05cc723c-c165-41fc-b10b-fe556c31ed28", 00:11:07.882 "is_configured": true, 00:11:07.882 "data_offset": 2048, 00:11:07.882 "data_size": 63488 00:11:07.882 } 00:11:07.882 ] 00:11:07.882 }' 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.882 08:48:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.141 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.141 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:08.141 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.141 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.401 [2024-09-28 08:48:46.187299] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.401 08:48:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.401 "name": "Existed_Raid", 00:11:08.401 "uuid": "25a3d42d-318f-45b8-9a6b-015c931b3d97", 00:11:08.401 "strip_size_kb": 64, 00:11:08.401 "state": "configuring", 00:11:08.401 "raid_level": "concat", 00:11:08.401 "superblock": true, 00:11:08.401 "num_base_bdevs": 4, 00:11:08.401 "num_base_bdevs_discovered": 3, 00:11:08.401 "num_base_bdevs_operational": 4, 00:11:08.401 "base_bdevs_list": [ 00:11:08.401 { 00:11:08.401 "name": null, 00:11:08.401 "uuid": "17ee34c7-86af-4b23-886a-3c03736fb071", 00:11:08.401 "is_configured": false, 00:11:08.401 "data_offset": 0, 00:11:08.401 "data_size": 63488 00:11:08.401 }, 00:11:08.401 { 00:11:08.401 "name": "BaseBdev2", 00:11:08.401 "uuid": "a60db030-b1ee-4aea-b270-fd5c112e8682", 00:11:08.401 "is_configured": true, 00:11:08.401 "data_offset": 2048, 00:11:08.401 "data_size": 63488 00:11:08.401 }, 00:11:08.401 { 00:11:08.401 "name": "BaseBdev3", 00:11:08.401 "uuid": "6485ca1f-1f42-4af1-aca2-d87bc5c95d96", 00:11:08.401 "is_configured": true, 00:11:08.401 "data_offset": 2048, 00:11:08.401 "data_size": 63488 00:11:08.401 }, 00:11:08.401 { 00:11:08.401 "name": "BaseBdev4", 00:11:08.401 "uuid": "05cc723c-c165-41fc-b10b-fe556c31ed28", 00:11:08.401 "is_configured": true, 00:11:08.401 "data_offset": 2048, 00:11:08.401 "data_size": 63488 00:11:08.401 } 00:11:08.401 ] 00:11:08.401 }' 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.401 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.661 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.661 08:48:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:08.661 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.661 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.661 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.661 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:08.661 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.661 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.661 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.661 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 17ee34c7-86af-4b23-886a-3c03736fb071 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 [2024-09-28 08:48:46.715743] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:08.921 [2024-09-28 08:48:46.716014] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:08.921 [2024-09-28 08:48:46.716033] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:08.921 [2024-09-28 08:48:46.716337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:08.921 [2024-09-28 08:48:46.716480] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:08.921 [2024-09-28 08:48:46.716492] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:08.921 [2024-09-28 08:48:46.716627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.921 NewBaseBdev 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.921 08:48:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 [ 00:11:08.921 { 00:11:08.921 "name": "NewBaseBdev", 00:11:08.921 "aliases": [ 00:11:08.921 "17ee34c7-86af-4b23-886a-3c03736fb071" 00:11:08.921 ], 00:11:08.921 "product_name": "Malloc disk", 00:11:08.921 "block_size": 512, 00:11:08.921 "num_blocks": 65536, 00:11:08.921 "uuid": "17ee34c7-86af-4b23-886a-3c03736fb071", 00:11:08.921 "assigned_rate_limits": { 00:11:08.921 "rw_ios_per_sec": 0, 00:11:08.921 "rw_mbytes_per_sec": 0, 00:11:08.921 "r_mbytes_per_sec": 0, 00:11:08.921 "w_mbytes_per_sec": 0 00:11:08.921 }, 00:11:08.921 "claimed": true, 00:11:08.921 "claim_type": "exclusive_write", 00:11:08.921 "zoned": false, 00:11:08.921 "supported_io_types": { 00:11:08.921 "read": true, 00:11:08.921 "write": true, 00:11:08.921 "unmap": true, 00:11:08.921 "flush": true, 00:11:08.921 "reset": true, 00:11:08.921 "nvme_admin": false, 00:11:08.921 "nvme_io": false, 00:11:08.921 "nvme_io_md": false, 00:11:08.921 "write_zeroes": true, 00:11:08.921 "zcopy": true, 00:11:08.921 "get_zone_info": false, 00:11:08.921 "zone_management": false, 00:11:08.921 "zone_append": false, 00:11:08.921 "compare": false, 00:11:08.921 "compare_and_write": false, 00:11:08.921 "abort": true, 00:11:08.921 "seek_hole": false, 00:11:08.921 "seek_data": false, 00:11:08.921 "copy": true, 00:11:08.921 "nvme_iov_md": false 00:11:08.921 }, 00:11:08.921 "memory_domains": [ 00:11:08.921 { 00:11:08.921 "dma_device_id": "system", 00:11:08.921 "dma_device_type": 1 00:11:08.921 }, 00:11:08.921 { 00:11:08.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.921 "dma_device_type": 2 00:11:08.921 } 00:11:08.921 ], 00:11:08.921 "driver_specific": {} 00:11:08.921 } 00:11:08.921 ] 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:08.921 08:48:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.921 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.921 "name": "Existed_Raid", 00:11:08.922 "uuid": "25a3d42d-318f-45b8-9a6b-015c931b3d97", 00:11:08.922 "strip_size_kb": 64, 00:11:08.922 
"state": "online", 00:11:08.922 "raid_level": "concat", 00:11:08.922 "superblock": true, 00:11:08.922 "num_base_bdevs": 4, 00:11:08.922 "num_base_bdevs_discovered": 4, 00:11:08.922 "num_base_bdevs_operational": 4, 00:11:08.922 "base_bdevs_list": [ 00:11:08.922 { 00:11:08.922 "name": "NewBaseBdev", 00:11:08.922 "uuid": "17ee34c7-86af-4b23-886a-3c03736fb071", 00:11:08.922 "is_configured": true, 00:11:08.922 "data_offset": 2048, 00:11:08.922 "data_size": 63488 00:11:08.922 }, 00:11:08.922 { 00:11:08.922 "name": "BaseBdev2", 00:11:08.922 "uuid": "a60db030-b1ee-4aea-b270-fd5c112e8682", 00:11:08.922 "is_configured": true, 00:11:08.922 "data_offset": 2048, 00:11:08.922 "data_size": 63488 00:11:08.922 }, 00:11:08.922 { 00:11:08.922 "name": "BaseBdev3", 00:11:08.922 "uuid": "6485ca1f-1f42-4af1-aca2-d87bc5c95d96", 00:11:08.922 "is_configured": true, 00:11:08.922 "data_offset": 2048, 00:11:08.922 "data_size": 63488 00:11:08.922 }, 00:11:08.922 { 00:11:08.922 "name": "BaseBdev4", 00:11:08.922 "uuid": "05cc723c-c165-41fc-b10b-fe556c31ed28", 00:11:08.922 "is_configured": true, 00:11:08.922 "data_offset": 2048, 00:11:08.922 "data_size": 63488 00:11:08.922 } 00:11:08.922 ] 00:11:08.922 }' 00:11:08.922 08:48:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.922 08:48:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.490 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:09.490 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:09.490 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:09.491 
08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:09.491 [2024-09-28 08:48:47.191482] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:09.491 "name": "Existed_Raid", 00:11:09.491 "aliases": [ 00:11:09.491 "25a3d42d-318f-45b8-9a6b-015c931b3d97" 00:11:09.491 ], 00:11:09.491 "product_name": "Raid Volume", 00:11:09.491 "block_size": 512, 00:11:09.491 "num_blocks": 253952, 00:11:09.491 "uuid": "25a3d42d-318f-45b8-9a6b-015c931b3d97", 00:11:09.491 "assigned_rate_limits": { 00:11:09.491 "rw_ios_per_sec": 0, 00:11:09.491 "rw_mbytes_per_sec": 0, 00:11:09.491 "r_mbytes_per_sec": 0, 00:11:09.491 "w_mbytes_per_sec": 0 00:11:09.491 }, 00:11:09.491 "claimed": false, 00:11:09.491 "zoned": false, 00:11:09.491 "supported_io_types": { 00:11:09.491 "read": true, 00:11:09.491 "write": true, 00:11:09.491 "unmap": true, 00:11:09.491 "flush": true, 00:11:09.491 "reset": true, 00:11:09.491 "nvme_admin": false, 00:11:09.491 "nvme_io": false, 00:11:09.491 "nvme_io_md": false, 00:11:09.491 "write_zeroes": true, 00:11:09.491 "zcopy": false, 00:11:09.491 "get_zone_info": false, 00:11:09.491 "zone_management": false, 00:11:09.491 "zone_append": false, 00:11:09.491 "compare": false, 00:11:09.491 "compare_and_write": false, 00:11:09.491 "abort": 
false, 00:11:09.491 "seek_hole": false, 00:11:09.491 "seek_data": false, 00:11:09.491 "copy": false, 00:11:09.491 "nvme_iov_md": false 00:11:09.491 }, 00:11:09.491 "memory_domains": [ 00:11:09.491 { 00:11:09.491 "dma_device_id": "system", 00:11:09.491 "dma_device_type": 1 00:11:09.491 }, 00:11:09.491 { 00:11:09.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.491 "dma_device_type": 2 00:11:09.491 }, 00:11:09.491 { 00:11:09.491 "dma_device_id": "system", 00:11:09.491 "dma_device_type": 1 00:11:09.491 }, 00:11:09.491 { 00:11:09.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.491 "dma_device_type": 2 00:11:09.491 }, 00:11:09.491 { 00:11:09.491 "dma_device_id": "system", 00:11:09.491 "dma_device_type": 1 00:11:09.491 }, 00:11:09.491 { 00:11:09.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.491 "dma_device_type": 2 00:11:09.491 }, 00:11:09.491 { 00:11:09.491 "dma_device_id": "system", 00:11:09.491 "dma_device_type": 1 00:11:09.491 }, 00:11:09.491 { 00:11:09.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.491 "dma_device_type": 2 00:11:09.491 } 00:11:09.491 ], 00:11:09.491 "driver_specific": { 00:11:09.491 "raid": { 00:11:09.491 "uuid": "25a3d42d-318f-45b8-9a6b-015c931b3d97", 00:11:09.491 "strip_size_kb": 64, 00:11:09.491 "state": "online", 00:11:09.491 "raid_level": "concat", 00:11:09.491 "superblock": true, 00:11:09.491 "num_base_bdevs": 4, 00:11:09.491 "num_base_bdevs_discovered": 4, 00:11:09.491 "num_base_bdevs_operational": 4, 00:11:09.491 "base_bdevs_list": [ 00:11:09.491 { 00:11:09.491 "name": "NewBaseBdev", 00:11:09.491 "uuid": "17ee34c7-86af-4b23-886a-3c03736fb071", 00:11:09.491 "is_configured": true, 00:11:09.491 "data_offset": 2048, 00:11:09.491 "data_size": 63488 00:11:09.491 }, 00:11:09.491 { 00:11:09.491 "name": "BaseBdev2", 00:11:09.491 "uuid": "a60db030-b1ee-4aea-b270-fd5c112e8682", 00:11:09.491 "is_configured": true, 00:11:09.491 "data_offset": 2048, 00:11:09.491 "data_size": 63488 00:11:09.491 }, 00:11:09.491 { 00:11:09.491 
"name": "BaseBdev3", 00:11:09.491 "uuid": "6485ca1f-1f42-4af1-aca2-d87bc5c95d96", 00:11:09.491 "is_configured": true, 00:11:09.491 "data_offset": 2048, 00:11:09.491 "data_size": 63488 00:11:09.491 }, 00:11:09.491 { 00:11:09.491 "name": "BaseBdev4", 00:11:09.491 "uuid": "05cc723c-c165-41fc-b10b-fe556c31ed28", 00:11:09.491 "is_configured": true, 00:11:09.491 "data_offset": 2048, 00:11:09.491 "data_size": 63488 00:11:09.491 } 00:11:09.491 ] 00:11:09.491 } 00:11:09.491 } 00:11:09.491 }' 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:09.491 BaseBdev2 00:11:09.491 BaseBdev3 00:11:09.491 BaseBdev4' 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.491 08:48:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.491 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.492 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:09.492 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.492 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:09.492 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.492 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.492 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.752 [2024-09-28 08:48:47.514553] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.752 [2024-09-28 08:48:47.514586] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.752 [2024-09-28 08:48:47.514680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.752 [2024-09-28 08:48:47.514757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.752 [2024-09-28 08:48:47.514768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71961 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 71961 ']' 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 71961 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71961 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71961' 00:11:09.752 killing process with pid 71961 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 71961 00:11:09.752 [2024-09-28 08:48:47.559962] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.752 08:48:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 71961 00:11:10.011 [2024-09-28 08:48:47.970269] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.392 08:48:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:11.392 00:11:11.392 real 0m11.840s 00:11:11.392 user 0m18.466s 00:11:11.392 sys 0m2.271s 00:11:11.392 08:48:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.392 08:48:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.392 ************************************ 00:11:11.392 END TEST raid_state_function_test_sb 00:11:11.392 ************************************ 00:11:11.392 08:48:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:11.392 08:48:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:11.392 08:48:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.392 08:48:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.392 ************************************ 00:11:11.392 START TEST raid_superblock_test 00:11:11.392 ************************************ 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:11.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72632 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72632 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72632 ']' 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:11.392 08:48:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.662 [2024-09-28 08:48:49.478159] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:11.662 [2024-09-28 08:48:49.478352] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72632 ] 00:11:11.662 [2024-09-28 08:48:49.648222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.932 [2024-09-28 08:48:49.888922] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.191 [2024-09-28 08:48:50.114563] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.191 [2024-09-28 08:48:50.114705] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:12.451 
08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.451 malloc1 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.451 [2024-09-28 08:48:50.359424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:12.451 [2024-09-28 08:48:50.359553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.451 [2024-09-28 08:48:50.359602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:12.451 [2024-09-28 08:48:50.359639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.451 [2024-09-28 08:48:50.362046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.451 [2024-09-28 08:48:50.362114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:12.451 pt1 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:12.451 08:48:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.452 malloc2 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.452 [2024-09-28 08:48:50.430985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:12.452 [2024-09-28 08:48:50.431079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.452 [2024-09-28 08:48:50.431145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:12.452 [2024-09-28 08:48:50.431174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.452 [2024-09-28 08:48:50.433549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.452 [2024-09-28 08:48:50.433617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:12.452 
pt2 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.452 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.711 malloc3 00:11:12.711 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.711 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.712 [2024-09-28 08:48:50.496874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:12.712 [2024-09-28 08:48:50.496977] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.712 [2024-09-28 08:48:50.497016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:12.712 [2024-09-28 08:48:50.497044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.712 [2024-09-28 08:48:50.499371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.712 [2024-09-28 08:48:50.499438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:12.712 pt3 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.712 malloc4 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.712 [2024-09-28 08:48:50.564341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:12.712 [2024-09-28 08:48:50.564447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.712 [2024-09-28 08:48:50.564494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:12.712 [2024-09-28 08:48:50.564540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.712 [2024-09-28 08:48:50.566859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.712 [2024-09-28 08:48:50.566885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:12.712 pt4 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.712 [2024-09-28 08:48:50.576374] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:12.712 [2024-09-28 
08:48:50.578413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:12.712 [2024-09-28 08:48:50.578515] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:12.712 [2024-09-28 08:48:50.578610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:12.712 [2024-09-28 08:48:50.578848] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:12.712 [2024-09-28 08:48:50.578902] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:12.712 [2024-09-28 08:48:50.579180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:12.712 [2024-09-28 08:48:50.579378] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:12.712 [2024-09-28 08:48:50.579425] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:12.712 [2024-09-28 08:48:50.579609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.712 "name": "raid_bdev1", 00:11:12.712 "uuid": "9ffc6f00-f6dd-4e84-9aee-fb411322d129", 00:11:12.712 "strip_size_kb": 64, 00:11:12.712 "state": "online", 00:11:12.712 "raid_level": "concat", 00:11:12.712 "superblock": true, 00:11:12.712 "num_base_bdevs": 4, 00:11:12.712 "num_base_bdevs_discovered": 4, 00:11:12.712 "num_base_bdevs_operational": 4, 00:11:12.712 "base_bdevs_list": [ 00:11:12.712 { 00:11:12.712 "name": "pt1", 00:11:12.712 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:12.712 "is_configured": true, 00:11:12.712 "data_offset": 2048, 00:11:12.712 "data_size": 63488 00:11:12.712 }, 00:11:12.712 { 00:11:12.712 "name": "pt2", 00:11:12.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.712 "is_configured": true, 00:11:12.712 "data_offset": 2048, 00:11:12.712 "data_size": 63488 00:11:12.712 }, 00:11:12.712 { 00:11:12.712 "name": "pt3", 00:11:12.712 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:12.712 "is_configured": true, 00:11:12.712 "data_offset": 2048, 00:11:12.712 
"data_size": 63488 00:11:12.712 }, 00:11:12.712 { 00:11:12.712 "name": "pt4", 00:11:12.712 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:12.712 "is_configured": true, 00:11:12.712 "data_offset": 2048, 00:11:12.712 "data_size": 63488 00:11:12.712 } 00:11:12.712 ] 00:11:12.712 }' 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.712 08:48:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.282 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:13.282 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:13.282 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:13.282 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:13.282 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:13.282 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:13.282 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:13.282 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:13.282 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.282 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.282 [2024-09-28 08:48:51.031898] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.282 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.282 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:13.282 "name": "raid_bdev1", 00:11:13.282 "aliases": [ 00:11:13.282 "9ffc6f00-f6dd-4e84-9aee-fb411322d129" 
00:11:13.282 ], 00:11:13.282 "product_name": "Raid Volume", 00:11:13.282 "block_size": 512, 00:11:13.282 "num_blocks": 253952, 00:11:13.282 "uuid": "9ffc6f00-f6dd-4e84-9aee-fb411322d129", 00:11:13.282 "assigned_rate_limits": { 00:11:13.282 "rw_ios_per_sec": 0, 00:11:13.282 "rw_mbytes_per_sec": 0, 00:11:13.282 "r_mbytes_per_sec": 0, 00:11:13.282 "w_mbytes_per_sec": 0 00:11:13.282 }, 00:11:13.282 "claimed": false, 00:11:13.282 "zoned": false, 00:11:13.282 "supported_io_types": { 00:11:13.282 "read": true, 00:11:13.282 "write": true, 00:11:13.282 "unmap": true, 00:11:13.282 "flush": true, 00:11:13.282 "reset": true, 00:11:13.282 "nvme_admin": false, 00:11:13.282 "nvme_io": false, 00:11:13.282 "nvme_io_md": false, 00:11:13.282 "write_zeroes": true, 00:11:13.282 "zcopy": false, 00:11:13.282 "get_zone_info": false, 00:11:13.282 "zone_management": false, 00:11:13.282 "zone_append": false, 00:11:13.282 "compare": false, 00:11:13.282 "compare_and_write": false, 00:11:13.282 "abort": false, 00:11:13.282 "seek_hole": false, 00:11:13.282 "seek_data": false, 00:11:13.282 "copy": false, 00:11:13.282 "nvme_iov_md": false 00:11:13.282 }, 00:11:13.282 "memory_domains": [ 00:11:13.282 { 00:11:13.282 "dma_device_id": "system", 00:11:13.282 "dma_device_type": 1 00:11:13.282 }, 00:11:13.282 { 00:11:13.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.282 "dma_device_type": 2 00:11:13.282 }, 00:11:13.282 { 00:11:13.282 "dma_device_id": "system", 00:11:13.282 "dma_device_type": 1 00:11:13.282 }, 00:11:13.282 { 00:11:13.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.282 "dma_device_type": 2 00:11:13.282 }, 00:11:13.282 { 00:11:13.282 "dma_device_id": "system", 00:11:13.282 "dma_device_type": 1 00:11:13.282 }, 00:11:13.282 { 00:11:13.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.282 "dma_device_type": 2 00:11:13.282 }, 00:11:13.282 { 00:11:13.282 "dma_device_id": "system", 00:11:13.282 "dma_device_type": 1 00:11:13.282 }, 00:11:13.282 { 00:11:13.282 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:13.282 "dma_device_type": 2 00:11:13.282 } 00:11:13.282 ], 00:11:13.282 "driver_specific": { 00:11:13.282 "raid": { 00:11:13.282 "uuid": "9ffc6f00-f6dd-4e84-9aee-fb411322d129", 00:11:13.282 "strip_size_kb": 64, 00:11:13.282 "state": "online", 00:11:13.282 "raid_level": "concat", 00:11:13.282 "superblock": true, 00:11:13.282 "num_base_bdevs": 4, 00:11:13.282 "num_base_bdevs_discovered": 4, 00:11:13.282 "num_base_bdevs_operational": 4, 00:11:13.282 "base_bdevs_list": [ 00:11:13.282 { 00:11:13.282 "name": "pt1", 00:11:13.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:13.282 "is_configured": true, 00:11:13.282 "data_offset": 2048, 00:11:13.282 "data_size": 63488 00:11:13.282 }, 00:11:13.282 { 00:11:13.282 "name": "pt2", 00:11:13.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:13.282 "is_configured": true, 00:11:13.282 "data_offset": 2048, 00:11:13.282 "data_size": 63488 00:11:13.282 }, 00:11:13.282 { 00:11:13.282 "name": "pt3", 00:11:13.283 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:13.283 "is_configured": true, 00:11:13.283 "data_offset": 2048, 00:11:13.283 "data_size": 63488 00:11:13.283 }, 00:11:13.283 { 00:11:13.283 "name": "pt4", 00:11:13.283 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:13.283 "is_configured": true, 00:11:13.283 "data_offset": 2048, 00:11:13.283 "data_size": 63488 00:11:13.283 } 00:11:13.283 ] 00:11:13.283 } 00:11:13.283 } 00:11:13.283 }' 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:13.283 pt2 00:11:13.283 pt3 00:11:13.283 pt4' 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.283 08:48:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.283 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.543 [2024-09-28 08:48:51.339277] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9ffc6f00-f6dd-4e84-9aee-fb411322d129 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9ffc6f00-f6dd-4e84-9aee-fb411322d129 ']' 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.543 [2024-09-28 08:48:51.374936] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:13.543 [2024-09-28 08:48:51.374966] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.543 [2024-09-28 08:48:51.375043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.543 [2024-09-28 08:48:51.375121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.543 [2024-09-28 08:48:51.375139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:13.543 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:13.543 08:48:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:13.544 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:13.544 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:13.544 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.544 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.544 [2024-09-28 08:48:51.534682] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:13.544 [2024-09-28 08:48:51.536856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:13.544 [2024-09-28 08:48:51.536906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:13.544 [2024-09-28 08:48:51.536940] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:13.544 [2024-09-28 08:48:51.536990] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:13.544 [2024-09-28 08:48:51.537053] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:13.544 [2024-09-28 08:48:51.537077] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:13.544 [2024-09-28 08:48:51.537096] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:13.544 [2024-09-28 08:48:51.537110] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:13.544 [2024-09-28 08:48:51.537121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:13.803 request: 00:11:13.803 { 00:11:13.804 "name": "raid_bdev1", 00:11:13.804 "raid_level": "concat", 00:11:13.804 "base_bdevs": [ 00:11:13.804 "malloc1", 00:11:13.804 "malloc2", 00:11:13.804 "malloc3", 00:11:13.804 "malloc4" 00:11:13.804 ], 00:11:13.804 "strip_size_kb": 64, 00:11:13.804 "superblock": false, 00:11:13.804 "method": "bdev_raid_create", 00:11:13.804 "req_id": 1 00:11:13.804 } 00:11:13.804 Got JSON-RPC error response 00:11:13.804 response: 00:11:13.804 { 00:11:13.804 "code": -17, 00:11:13.804 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:13.804 } 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.804 [2024-09-28 08:48:51.602532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:13.804 [2024-09-28 08:48:51.602581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.804 [2024-09-28 08:48:51.602615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:13.804 [2024-09-28 08:48:51.602626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.804 [2024-09-28 08:48:51.605060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.804 [2024-09-28 08:48:51.605098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:13.804 [2024-09-28 08:48:51.605182] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:13.804 [2024-09-28 08:48:51.605252] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:13.804 pt1 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.804 "name": "raid_bdev1", 00:11:13.804 "uuid": "9ffc6f00-f6dd-4e84-9aee-fb411322d129", 00:11:13.804 "strip_size_kb": 64, 00:11:13.804 "state": "configuring", 00:11:13.804 "raid_level": "concat", 00:11:13.804 "superblock": true, 00:11:13.804 "num_base_bdevs": 4, 00:11:13.804 "num_base_bdevs_discovered": 1, 00:11:13.804 "num_base_bdevs_operational": 4, 00:11:13.804 "base_bdevs_list": [ 00:11:13.804 { 00:11:13.804 "name": "pt1", 00:11:13.804 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:13.804 "is_configured": true, 00:11:13.804 "data_offset": 2048, 00:11:13.804 "data_size": 63488 00:11:13.804 }, 00:11:13.804 { 00:11:13.804 "name": null, 00:11:13.804 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:13.804 "is_configured": false, 00:11:13.804 "data_offset": 2048, 00:11:13.804 "data_size": 63488 00:11:13.804 }, 00:11:13.804 { 00:11:13.804 "name": null, 00:11:13.804 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:13.804 "is_configured": false, 00:11:13.804 "data_offset": 2048, 00:11:13.804 "data_size": 63488 00:11:13.804 }, 00:11:13.804 { 00:11:13.804 "name": null, 00:11:13.804 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:13.804 "is_configured": false, 00:11:13.804 "data_offset": 2048, 00:11:13.804 "data_size": 63488 00:11:13.804 } 00:11:13.804 ] 00:11:13.804 }' 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.804 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.063 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:14.063 08:48:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:14.063 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.063 08:48:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.063 [2024-09-28 08:48:52.005860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:14.063 [2024-09-28 08:48:52.005929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.063 [2024-09-28 08:48:52.005951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:14.063 [2024-09-28 08:48:52.005963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.063 [2024-09-28 08:48:52.006448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.063 [2024-09-28 08:48:52.006478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:14.063 [2024-09-28 08:48:52.006568] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:14.064 [2024-09-28 08:48:52.006601] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:14.064 pt2 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.064 [2024-09-28 08:48:52.017846] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.064 08:48:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.064 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.323 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.323 "name": "raid_bdev1", 00:11:14.323 "uuid": "9ffc6f00-f6dd-4e84-9aee-fb411322d129", 00:11:14.323 "strip_size_kb": 64, 00:11:14.323 "state": "configuring", 00:11:14.323 "raid_level": "concat", 00:11:14.323 "superblock": true, 00:11:14.323 "num_base_bdevs": 4, 00:11:14.323 "num_base_bdevs_discovered": 1, 00:11:14.323 "num_base_bdevs_operational": 4, 00:11:14.323 "base_bdevs_list": [ 00:11:14.323 { 00:11:14.323 "name": "pt1", 00:11:14.323 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:14.323 "is_configured": true, 00:11:14.323 "data_offset": 2048, 00:11:14.323 "data_size": 63488 00:11:14.323 }, 00:11:14.323 { 00:11:14.323 "name": null, 00:11:14.323 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.323 "is_configured": false, 00:11:14.323 "data_offset": 0, 00:11:14.323 "data_size": 63488 00:11:14.323 }, 00:11:14.323 { 00:11:14.323 "name": null, 00:11:14.323 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:14.323 "is_configured": false, 00:11:14.323 "data_offset": 2048, 00:11:14.323 "data_size": 63488 00:11:14.323 }, 00:11:14.323 { 00:11:14.323 "name": null, 00:11:14.323 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:14.323 "is_configured": false, 00:11:14.323 "data_offset": 2048, 00:11:14.323 "data_size": 63488 00:11:14.323 } 00:11:14.323 ] 00:11:14.323 }' 00:11:14.323 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.323 08:48:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:14.582 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:14.582 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:14.582 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:14.582 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.582 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.582 [2024-09-28 08:48:52.477049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:14.582 [2024-09-28 08:48:52.477107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.582 [2024-09-28 08:48:52.477146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:14.582 [2024-09-28 08:48:52.477155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.582 [2024-09-28 08:48:52.477624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.583 [2024-09-28 08:48:52.477663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:14.583 [2024-09-28 08:48:52.477753] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:14.583 [2024-09-28 08:48:52.477798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:14.583 pt2 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.583 [2024-09-28 08:48:52.489008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:14.583 [2024-09-28 08:48:52.489057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.583 [2024-09-28 08:48:52.489089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:14.583 [2024-09-28 08:48:52.489100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.583 [2024-09-28 08:48:52.489501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.583 [2024-09-28 08:48:52.489525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:14.583 [2024-09-28 08:48:52.489593] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:14.583 [2024-09-28 08:48:52.489616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:14.583 pt3 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.583 [2024-09-28 08:48:52.500967] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:14.583 [2024-09-28 08:48:52.501012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.583 [2024-09-28 08:48:52.501047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:14.583 [2024-09-28 08:48:52.501055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.583 [2024-09-28 08:48:52.501420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.583 [2024-09-28 08:48:52.501443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:14.583 [2024-09-28 08:48:52.501503] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:14.583 [2024-09-28 08:48:52.501527] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:14.583 [2024-09-28 08:48:52.501666] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:14.583 [2024-09-28 08:48:52.501678] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:14.583 [2024-09-28 08:48:52.501936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:14.583 [2024-09-28 08:48:52.502088] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:14.583 [2024-09-28 08:48:52.502104] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:14.583 [2024-09-28 08:48:52.502224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.583 pt4 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.583 "name": "raid_bdev1", 00:11:14.583 "uuid": "9ffc6f00-f6dd-4e84-9aee-fb411322d129", 00:11:14.583 "strip_size_kb": 64, 00:11:14.583 "state": "online", 00:11:14.583 "raid_level": "concat", 00:11:14.583 
"superblock": true, 00:11:14.583 "num_base_bdevs": 4, 00:11:14.583 "num_base_bdevs_discovered": 4, 00:11:14.583 "num_base_bdevs_operational": 4, 00:11:14.583 "base_bdevs_list": [ 00:11:14.583 { 00:11:14.583 "name": "pt1", 00:11:14.583 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:14.583 "is_configured": true, 00:11:14.583 "data_offset": 2048, 00:11:14.583 "data_size": 63488 00:11:14.583 }, 00:11:14.583 { 00:11:14.583 "name": "pt2", 00:11:14.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.583 "is_configured": true, 00:11:14.583 "data_offset": 2048, 00:11:14.583 "data_size": 63488 00:11:14.583 }, 00:11:14.583 { 00:11:14.583 "name": "pt3", 00:11:14.583 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:14.583 "is_configured": true, 00:11:14.583 "data_offset": 2048, 00:11:14.583 "data_size": 63488 00:11:14.583 }, 00:11:14.583 { 00:11:14.583 "name": "pt4", 00:11:14.583 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:14.583 "is_configured": true, 00:11:14.583 "data_offset": 2048, 00:11:14.583 "data_size": 63488 00:11:14.583 } 00:11:14.583 ] 00:11:14.583 }' 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.583 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.153 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:15.153 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:15.153 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:15.153 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.153 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.153 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.153 08:48:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:15.153 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:15.153 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.153 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.153 [2024-09-28 08:48:52.916625] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.153 08:48:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.153 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:15.153 "name": "raid_bdev1", 00:11:15.153 "aliases": [ 00:11:15.153 "9ffc6f00-f6dd-4e84-9aee-fb411322d129" 00:11:15.153 ], 00:11:15.153 "product_name": "Raid Volume", 00:11:15.153 "block_size": 512, 00:11:15.153 "num_blocks": 253952, 00:11:15.153 "uuid": "9ffc6f00-f6dd-4e84-9aee-fb411322d129", 00:11:15.153 "assigned_rate_limits": { 00:11:15.153 "rw_ios_per_sec": 0, 00:11:15.153 "rw_mbytes_per_sec": 0, 00:11:15.153 "r_mbytes_per_sec": 0, 00:11:15.153 "w_mbytes_per_sec": 0 00:11:15.153 }, 00:11:15.153 "claimed": false, 00:11:15.153 "zoned": false, 00:11:15.153 "supported_io_types": { 00:11:15.153 "read": true, 00:11:15.153 "write": true, 00:11:15.153 "unmap": true, 00:11:15.153 "flush": true, 00:11:15.153 "reset": true, 00:11:15.153 "nvme_admin": false, 00:11:15.153 "nvme_io": false, 00:11:15.153 "nvme_io_md": false, 00:11:15.153 "write_zeroes": true, 00:11:15.153 "zcopy": false, 00:11:15.153 "get_zone_info": false, 00:11:15.153 "zone_management": false, 00:11:15.153 "zone_append": false, 00:11:15.153 "compare": false, 00:11:15.153 "compare_and_write": false, 00:11:15.153 "abort": false, 00:11:15.153 "seek_hole": false, 00:11:15.153 "seek_data": false, 00:11:15.153 "copy": false, 00:11:15.153 "nvme_iov_md": false 00:11:15.153 }, 00:11:15.153 
"memory_domains": [ 00:11:15.153 { 00:11:15.153 "dma_device_id": "system", 00:11:15.153 "dma_device_type": 1 00:11:15.153 }, 00:11:15.153 { 00:11:15.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.153 "dma_device_type": 2 00:11:15.153 }, 00:11:15.153 { 00:11:15.153 "dma_device_id": "system", 00:11:15.153 "dma_device_type": 1 00:11:15.153 }, 00:11:15.153 { 00:11:15.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.153 "dma_device_type": 2 00:11:15.153 }, 00:11:15.153 { 00:11:15.153 "dma_device_id": "system", 00:11:15.153 "dma_device_type": 1 00:11:15.153 }, 00:11:15.153 { 00:11:15.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.153 "dma_device_type": 2 00:11:15.153 }, 00:11:15.153 { 00:11:15.153 "dma_device_id": "system", 00:11:15.153 "dma_device_type": 1 00:11:15.153 }, 00:11:15.153 { 00:11:15.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.153 "dma_device_type": 2 00:11:15.153 } 00:11:15.153 ], 00:11:15.153 "driver_specific": { 00:11:15.153 "raid": { 00:11:15.153 "uuid": "9ffc6f00-f6dd-4e84-9aee-fb411322d129", 00:11:15.153 "strip_size_kb": 64, 00:11:15.153 "state": "online", 00:11:15.153 "raid_level": "concat", 00:11:15.153 "superblock": true, 00:11:15.153 "num_base_bdevs": 4, 00:11:15.153 "num_base_bdevs_discovered": 4, 00:11:15.153 "num_base_bdevs_operational": 4, 00:11:15.153 "base_bdevs_list": [ 00:11:15.153 { 00:11:15.153 "name": "pt1", 00:11:15.153 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.153 "is_configured": true, 00:11:15.153 "data_offset": 2048, 00:11:15.153 "data_size": 63488 00:11:15.153 }, 00:11:15.153 { 00:11:15.153 "name": "pt2", 00:11:15.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.153 "is_configured": true, 00:11:15.153 "data_offset": 2048, 00:11:15.153 "data_size": 63488 00:11:15.153 }, 00:11:15.153 { 00:11:15.153 "name": "pt3", 00:11:15.153 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:15.153 "is_configured": true, 00:11:15.153 "data_offset": 2048, 00:11:15.153 "data_size": 63488 
00:11:15.153 }, 00:11:15.153 { 00:11:15.153 "name": "pt4", 00:11:15.153 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:15.153 "is_configured": true, 00:11:15.153 "data_offset": 2048, 00:11:15.153 "data_size": 63488 00:11:15.153 } 00:11:15.153 ] 00:11:15.153 } 00:11:15.153 } 00:11:15.153 }' 00:11:15.153 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.153 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:15.153 pt2 00:11:15.153 pt3 00:11:15.153 pt4' 00:11:15.153 08:48:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:15.153 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.153 
08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.414 [2024-09-28 08:48:53.204048] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9ffc6f00-f6dd-4e84-9aee-fb411322d129 '!=' 9ffc6f00-f6dd-4e84-9aee-fb411322d129 ']' 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72632 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72632 ']' 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72632 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72632 00:11:15.414 killing process with pid 72632 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72632' 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72632 00:11:15.414 [2024-09-28 08:48:53.285528] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:15.414 [2024-09-28 08:48:53.285617] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.414 [2024-09-28 08:48:53.285706] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.414 [2024-09-28 08:48:53.285717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:15.414 08:48:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72632 00:11:15.984 [2024-09-28 08:48:53.697592] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:17.365 08:48:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:17.365 00:11:17.365 real 0m5.636s 00:11:17.365 user 0m7.810s 00:11:17.365 sys 0m1.060s 00:11:17.365 ************************************ 00:11:17.365 END TEST raid_superblock_test 00:11:17.365 ************************************ 00:11:17.365 08:48:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.365 08:48:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.365 08:48:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:17.365 08:48:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:17.365 08:48:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.365 08:48:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:17.365 ************************************ 00:11:17.365 START TEST raid_read_error_test 00:11:17.365 ************************************ 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WO4K1ff2Nj 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72898 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72898 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72898 ']' 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:17.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:17.365 08:48:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.365 [2024-09-28 08:48:55.197830] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:17.365 [2024-09-28 08:48:55.197976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72898 ] 00:11:17.625 [2024-09-28 08:48:55.367456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.625 [2024-09-28 08:48:55.599764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.885 [2024-09-28 08:48:55.822697] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.885 [2024-09-28 08:48:55.822736] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.143 BaseBdev1_malloc 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.143 true 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.143 [2024-09-28 08:48:56.084899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:18.143 [2024-09-28 08:48:56.084975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.143 [2024-09-28 08:48:56.084996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:18.143 [2024-09-28 08:48:56.085008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.143 [2024-09-28 08:48:56.087414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.143 [2024-09-28 08:48:56.087448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:18.143 BaseBdev1 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.143 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.401 BaseBdev2_malloc 00:11:18.401 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.401 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:18.401 08:48:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.401 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.401 true 00:11:18.401 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.401 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:18.401 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.401 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.401 [2024-09-28 08:48:56.165207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:18.401 [2024-09-28 08:48:56.165277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.401 [2024-09-28 08:48:56.165295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:18.401 [2024-09-28 08:48:56.165307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.401 [2024-09-28 08:48:56.167656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.401 [2024-09-28 08:48:56.167705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:18.401 BaseBdev2 00:11:18.401 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.402 BaseBdev3_malloc 00:11:18.402 08:48:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.402 true 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.402 [2024-09-28 08:48:56.237747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:18.402 [2024-09-28 08:48:56.237812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.402 [2024-09-28 08:48:56.237830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:18.402 [2024-09-28 08:48:56.237841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.402 [2024-09-28 08:48:56.240199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.402 [2024-09-28 08:48:56.240237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:18.402 BaseBdev3 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.402 BaseBdev4_malloc 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.402 true 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.402 [2024-09-28 08:48:56.310442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:18.402 [2024-09-28 08:48:56.310495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.402 [2024-09-28 08:48:56.310515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:18.402 [2024-09-28 08:48:56.310526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.402 [2024-09-28 08:48:56.312935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.402 [2024-09-28 08:48:56.312972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:18.402 BaseBdev4 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.402 [2024-09-28 08:48:56.322507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.402 [2024-09-28 08:48:56.324624] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.402 [2024-09-28 08:48:56.324728] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.402 [2024-09-28 08:48:56.324789] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:18.402 [2024-09-28 08:48:56.325012] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:18.402 [2024-09-28 08:48:56.325035] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:18.402 [2024-09-28 08:48:56.325287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:18.402 [2024-09-28 08:48:56.325462] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:18.402 [2024-09-28 08:48:56.325478] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:18.402 [2024-09-28 08:48:56.325642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:18.402 08:48:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.402 "name": "raid_bdev1", 00:11:18.402 "uuid": "9810bd57-2cf0-4e93-b240-134fb89e334f", 00:11:18.402 "strip_size_kb": 64, 00:11:18.402 "state": "online", 00:11:18.402 "raid_level": "concat", 00:11:18.402 "superblock": true, 00:11:18.402 "num_base_bdevs": 4, 00:11:18.402 "num_base_bdevs_discovered": 4, 00:11:18.402 "num_base_bdevs_operational": 4, 00:11:18.402 "base_bdevs_list": [ 
00:11:18.402 { 00:11:18.402 "name": "BaseBdev1", 00:11:18.402 "uuid": "bc70447a-dbd5-5ce4-a485-c96257ef5d03", 00:11:18.402 "is_configured": true, 00:11:18.402 "data_offset": 2048, 00:11:18.402 "data_size": 63488 00:11:18.402 }, 00:11:18.402 { 00:11:18.402 "name": "BaseBdev2", 00:11:18.402 "uuid": "2c1ff1ab-af02-50a4-85dc-6b9e60409cfe", 00:11:18.402 "is_configured": true, 00:11:18.402 "data_offset": 2048, 00:11:18.402 "data_size": 63488 00:11:18.402 }, 00:11:18.402 { 00:11:18.402 "name": "BaseBdev3", 00:11:18.402 "uuid": "56f744f9-a0cf-5056-8221-1fc0d045111d", 00:11:18.402 "is_configured": true, 00:11:18.402 "data_offset": 2048, 00:11:18.402 "data_size": 63488 00:11:18.402 }, 00:11:18.402 { 00:11:18.402 "name": "BaseBdev4", 00:11:18.402 "uuid": "727ecb3c-0df8-5136-8fa3-4aaa1a00d910", 00:11:18.402 "is_configured": true, 00:11:18.402 "data_offset": 2048, 00:11:18.402 "data_size": 63488 00:11:18.402 } 00:11:18.402 ] 00:11:18.402 }' 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.402 08:48:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.971 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:18.971 08:48:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:18.971 [2024-09-28 08:48:56.787020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.910 08:48:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.910 08:48:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.910 "name": "raid_bdev1", 00:11:19.910 "uuid": "9810bd57-2cf0-4e93-b240-134fb89e334f", 00:11:19.910 "strip_size_kb": 64, 00:11:19.910 "state": "online", 00:11:19.910 "raid_level": "concat", 00:11:19.910 "superblock": true, 00:11:19.910 "num_base_bdevs": 4, 00:11:19.910 "num_base_bdevs_discovered": 4, 00:11:19.910 "num_base_bdevs_operational": 4, 00:11:19.910 "base_bdevs_list": [ 00:11:19.910 { 00:11:19.910 "name": "BaseBdev1", 00:11:19.910 "uuid": "bc70447a-dbd5-5ce4-a485-c96257ef5d03", 00:11:19.910 "is_configured": true, 00:11:19.910 "data_offset": 2048, 00:11:19.910 "data_size": 63488 00:11:19.910 }, 00:11:19.910 { 00:11:19.910 "name": "BaseBdev2", 00:11:19.910 "uuid": "2c1ff1ab-af02-50a4-85dc-6b9e60409cfe", 00:11:19.910 "is_configured": true, 00:11:19.910 "data_offset": 2048, 00:11:19.910 "data_size": 63488 00:11:19.910 }, 00:11:19.910 { 00:11:19.910 "name": "BaseBdev3", 00:11:19.910 "uuid": "56f744f9-a0cf-5056-8221-1fc0d045111d", 00:11:19.910 "is_configured": true, 00:11:19.910 "data_offset": 2048, 00:11:19.910 "data_size": 63488 00:11:19.910 }, 00:11:19.910 { 00:11:19.910 "name": "BaseBdev4", 00:11:19.910 "uuid": "727ecb3c-0df8-5136-8fa3-4aaa1a00d910", 00:11:19.910 "is_configured": true, 00:11:19.910 "data_offset": 2048, 00:11:19.910 "data_size": 63488 00:11:19.910 } 00:11:19.910 ] 00:11:19.910 }' 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.910 08:48:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.170 08:48:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:20.170 08:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.170 08:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.170 [2024-09-28 08:48:58.128236] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.170 [2024-09-28 08:48:58.128274] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.170 [2024-09-28 08:48:58.130854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.170 [2024-09-28 08:48:58.130934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.170 [2024-09-28 08:48:58.130984] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.170 [2024-09-28 08:48:58.130997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:20.170 { 00:11:20.170 "results": [ 00:11:20.170 { 00:11:20.170 "job": "raid_bdev1", 00:11:20.170 "core_mask": "0x1", 00:11:20.170 "workload": "randrw", 00:11:20.170 "percentage": 50, 00:11:20.170 "status": "finished", 00:11:20.170 "queue_depth": 1, 00:11:20.170 "io_size": 131072, 00:11:20.170 "runtime": 1.34174, 00:11:20.170 "iops": 14208.415937513975, 00:11:20.170 "mibps": 1776.051992189247, 00:11:20.170 "io_failed": 1, 00:11:20.170 "io_timeout": 0, 00:11:20.170 "avg_latency_us": 99.29513251036158, 00:11:20.170 "min_latency_us": 25.2646288209607, 00:11:20.170 "max_latency_us": 1387.989519650655 00:11:20.170 } 00:11:20.170 ], 00:11:20.170 "core_count": 1 00:11:20.170 } 00:11:20.170 08:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.170 08:48:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72898 00:11:20.170 08:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72898 ']' 00:11:20.170 08:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72898 00:11:20.170 08:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:20.170 08:48:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.170 08:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72898 00:11:20.170 08:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:20.170 08:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:20.170 killing process with pid 72898 00:11:20.170 08:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72898' 00:11:20.170 08:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72898 00:11:20.170 [2024-09-28 08:48:58.161605] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:20.170 08:48:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72898 00:11:20.739 [2024-09-28 08:48:58.510052] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.124 08:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WO4K1ff2Nj 00:11:22.124 08:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:22.124 08:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:22.124 08:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:22.124 08:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:22.124 08:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:22.124 08:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:22.124 08:48:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:22.124 00:11:22.124 real 0m4.815s 00:11:22.124 user 0m5.411s 00:11:22.124 sys 0m0.719s 00:11:22.124 08:48:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:22.124 08:48:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.124 ************************************ 00:11:22.124 END TEST raid_read_error_test 00:11:22.124 ************************************ 00:11:22.124 08:48:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:22.124 08:48:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:22.124 08:48:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.124 08:48:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.124 ************************************ 00:11:22.124 START TEST raid_write_error_test 00:11:22.124 ************************************ 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RujOOVfBAq 00:11:22.124 08:48:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73044 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73044 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73044 ']' 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:22.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.124 08:48:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.124 [2024-09-28 08:49:00.086646] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:22.124 [2024-09-28 08:49:00.086811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73044 ] 00:11:22.384 [2024-09-28 08:49:00.254396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.643 [2024-09-28 08:49:00.504485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.903 [2024-09-28 08:49:00.737437] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.903 [2024-09-28 08:49:00.737476] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.163 08:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.163 08:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:23.163 08:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.163 08:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:23.163 08:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.163 08:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.163 BaseBdev1_malloc 00:11:23.163 08:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.163 08:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:23.163 08:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.163 08:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.163 true 00:11:23.164 08:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:23.164 08:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:23.164 08:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.164 08:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.164 [2024-09-28 08:49:00.979718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:23.164 [2024-09-28 08:49:00.979775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.164 [2024-09-28 08:49:00.979792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:23.164 [2024-09-28 08:49:00.979803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.164 [2024-09-28 08:49:00.982195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.164 [2024-09-28 08:49:00.982235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:23.164 BaseBdev1 00:11:23.164 08:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.164 08:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.164 08:49:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:23.164 08:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.164 08:49:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.164 BaseBdev2_malloc 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:23.164 08:49:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.164 true 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.164 [2024-09-28 08:49:01.081829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:23.164 [2024-09-28 08:49:01.081883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.164 [2024-09-28 08:49:01.081915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:23.164 [2024-09-28 08:49:01.081926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.164 [2024-09-28 08:49:01.084312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.164 [2024-09-28 08:49:01.084360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:23.164 BaseBdev2 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:23.164 BaseBdev3_malloc 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.164 true 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.164 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.164 [2024-09-28 08:49:01.153213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:23.164 [2024-09-28 08:49:01.153265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.164 [2024-09-28 08:49:01.153281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:23.164 [2024-09-28 08:49:01.153292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.164 [2024-09-28 08:49:01.155745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.164 [2024-09-28 08:49:01.155783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:23.164 BaseBdev3 00:11:23.423 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.424 BaseBdev4_malloc 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.424 true 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.424 [2024-09-28 08:49:01.226640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:23.424 [2024-09-28 08:49:01.226700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.424 [2024-09-28 08:49:01.226733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:23.424 [2024-09-28 08:49:01.226746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.424 [2024-09-28 08:49:01.229134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.424 [2024-09-28 08:49:01.229171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:23.424 BaseBdev4 
00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.424 [2024-09-28 08:49:01.238696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.424 [2024-09-28 08:49:01.240792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.424 [2024-09-28 08:49:01.240866] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.424 [2024-09-28 08:49:01.240924] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:23.424 [2024-09-28 08:49:01.241150] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:23.424 [2024-09-28 08:49:01.241172] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:23.424 [2024-09-28 08:49:01.241408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:23.424 [2024-09-28 08:49:01.241576] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:23.424 [2024-09-28 08:49:01.241592] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:23.424 [2024-09-28 08:49:01.241751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.424 "name": "raid_bdev1", 00:11:23.424 "uuid": "cda2bb20-0190-456b-92c7-7bb0430df0b7", 00:11:23.424 "strip_size_kb": 64, 00:11:23.424 "state": "online", 00:11:23.424 "raid_level": "concat", 00:11:23.424 "superblock": true, 00:11:23.424 "num_base_bdevs": 4, 00:11:23.424 "num_base_bdevs_discovered": 4, 00:11:23.424 
"num_base_bdevs_operational": 4, 00:11:23.424 "base_bdevs_list": [ 00:11:23.424 { 00:11:23.424 "name": "BaseBdev1", 00:11:23.424 "uuid": "6847e655-6f83-5ec9-af35-44b05fb85b0c", 00:11:23.424 "is_configured": true, 00:11:23.424 "data_offset": 2048, 00:11:23.424 "data_size": 63488 00:11:23.424 }, 00:11:23.424 { 00:11:23.424 "name": "BaseBdev2", 00:11:23.424 "uuid": "e780f718-9bae-5f7c-ad8d-3133f5b5b738", 00:11:23.424 "is_configured": true, 00:11:23.424 "data_offset": 2048, 00:11:23.424 "data_size": 63488 00:11:23.424 }, 00:11:23.424 { 00:11:23.424 "name": "BaseBdev3", 00:11:23.424 "uuid": "093b38a9-77c3-5b1d-9002-12139851abc0", 00:11:23.424 "is_configured": true, 00:11:23.424 "data_offset": 2048, 00:11:23.424 "data_size": 63488 00:11:23.424 }, 00:11:23.424 { 00:11:23.424 "name": "BaseBdev4", 00:11:23.424 "uuid": "0077f0a1-5a60-5e4d-921d-5cab47da1249", 00:11:23.424 "is_configured": true, 00:11:23.424 "data_offset": 2048, 00:11:23.424 "data_size": 63488 00:11:23.424 } 00:11:23.424 ] 00:11:23.424 }' 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.424 08:49:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.999 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:23.999 08:49:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:23.999 [2024-09-28 08:49:01.799164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.946 08:49:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.946 "name": "raid_bdev1", 00:11:24.946 "uuid": "cda2bb20-0190-456b-92c7-7bb0430df0b7", 00:11:24.946 "strip_size_kb": 64, 00:11:24.946 "state": "online", 00:11:24.946 "raid_level": "concat", 00:11:24.946 "superblock": true, 00:11:24.946 "num_base_bdevs": 4, 00:11:24.946 "num_base_bdevs_discovered": 4, 00:11:24.946 "num_base_bdevs_operational": 4, 00:11:24.946 "base_bdevs_list": [ 00:11:24.946 { 00:11:24.946 "name": "BaseBdev1", 00:11:24.946 "uuid": "6847e655-6f83-5ec9-af35-44b05fb85b0c", 00:11:24.946 "is_configured": true, 00:11:24.946 "data_offset": 2048, 00:11:24.946 "data_size": 63488 00:11:24.946 }, 00:11:24.946 { 00:11:24.946 "name": "BaseBdev2", 00:11:24.946 "uuid": "e780f718-9bae-5f7c-ad8d-3133f5b5b738", 00:11:24.946 "is_configured": true, 00:11:24.946 "data_offset": 2048, 00:11:24.946 "data_size": 63488 00:11:24.946 }, 00:11:24.946 { 00:11:24.946 "name": "BaseBdev3", 00:11:24.946 "uuid": "093b38a9-77c3-5b1d-9002-12139851abc0", 00:11:24.946 "is_configured": true, 00:11:24.946 "data_offset": 2048, 00:11:24.946 "data_size": 63488 00:11:24.946 }, 00:11:24.946 { 00:11:24.946 "name": "BaseBdev4", 00:11:24.946 "uuid": "0077f0a1-5a60-5e4d-921d-5cab47da1249", 00:11:24.946 "is_configured": true, 00:11:24.946 "data_offset": 2048, 00:11:24.946 "data_size": 63488 00:11:24.946 } 00:11:24.946 ] 00:11:24.946 }' 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.946 08:49:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.205 08:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:25.205 08:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.205 08:49:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:25.205 [2024-09-28 08:49:03.151964] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.205 [2024-09-28 08:49:03.152001] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.205 [2024-09-28 08:49:03.154665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.205 [2024-09-28 08:49:03.154732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.205 [2024-09-28 08:49:03.154779] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.205 [2024-09-28 08:49:03.154792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:25.205 { 00:11:25.205 "results": [ 00:11:25.205 { 00:11:25.205 "job": "raid_bdev1", 00:11:25.205 "core_mask": "0x1", 00:11:25.205 "workload": "randrw", 00:11:25.205 "percentage": 50, 00:11:25.205 "status": "finished", 00:11:25.205 "queue_depth": 1, 00:11:25.205 "io_size": 131072, 00:11:25.205 "runtime": 1.353291, 00:11:25.205 "iops": 14193.547433626618, 00:11:25.205 "mibps": 1774.1934292033272, 00:11:25.205 "io_failed": 1, 00:11:25.205 "io_timeout": 0, 00:11:25.205 "avg_latency_us": 99.26844317199384, 00:11:25.205 "min_latency_us": 25.041048034934498, 00:11:25.205 "max_latency_us": 1387.989519650655 00:11:25.205 } 00:11:25.205 ], 00:11:25.205 "core_count": 1 00:11:25.205 } 00:11:25.205 08:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.205 08:49:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73044 00:11:25.205 08:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73044 ']' 00:11:25.205 08:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73044 00:11:25.205 08:49:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:11:25.205 08:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:25.205 08:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73044 00:11:25.465 08:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:25.465 08:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:25.465 killing process with pid 73044 00:11:25.465 08:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73044' 00:11:25.465 08:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73044 00:11:25.465 [2024-09-28 08:49:03.201819] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:25.465 08:49:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73044 00:11:25.724 [2024-09-28 08:49:03.542909] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:27.106 08:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:27.106 08:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RujOOVfBAq 00:11:27.106 08:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:27.106 08:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:27.106 08:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:27.106 08:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:27.106 08:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:27.106 08:49:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:27.106 00:11:27.106 real 0m4.969s 00:11:27.106 user 0m5.638s 
00:11:27.106 sys 0m0.742s 00:11:27.106 08:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.106 08:49:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.106 ************************************ 00:11:27.106 END TEST raid_write_error_test 00:11:27.106 ************************************ 00:11:27.106 08:49:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:27.106 08:49:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:27.106 08:49:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:27.106 08:49:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.106 08:49:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:27.106 ************************************ 00:11:27.106 START TEST raid_state_function_test 00:11:27.106 ************************************ 00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:27.106 
08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73193
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73193'
00:11:27.106 Process raid pid: 73193
00:11:27.106 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73193
00:11:27.107 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73193 ']'
00:11:27.107 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:27.107 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:27.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:27.107 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:27.107 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:27.107 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.366 [2024-09-28 08:49:05.119037] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:11:27.366 [2024-09-28 08:49:05.119193] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:27.366 [2024-09-28 08:49:05.287143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:27.625 [2024-09-28 08:49:05.531248] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:11:27.884 [2024-09-28 08:49:05.764670] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:27.884 [2024-09-28 08:49:05.764708] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:28.143 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:28.143 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:11:28.143 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:28.143 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.143 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.143 [2024-09-28 08:49:05.948024] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:28.143 [2024-09-28 08:49:05.948075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:28.143 [2024-09-28 08:49:05.948086] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:28.143 [2024-09-28 08:49:05.948095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:28.143 [2024-09-28 08:49:05.948101] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:28.144 [2024-09-28 08:49:05.948112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:28.144 [2024-09-28 08:49:05.948118] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:28.144 [2024-09-28 08:49:05.948127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:28.144 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.144 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:28.144 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:28.144 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:28.144 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:28.144 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:28.144 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:28.144 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:28.144 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:28.144 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:28.144 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:28.144 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:28.144 08:49:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:28.144 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.144 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.144 08:49:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.144 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:28.144 "name": "Existed_Raid",
00:11:28.144 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.144 "strip_size_kb": 0,
00:11:28.144 "state": "configuring",
00:11:28.144 "raid_level": "raid1",
00:11:28.144 "superblock": false,
00:11:28.144 "num_base_bdevs": 4,
00:11:28.144 "num_base_bdevs_discovered": 0,
00:11:28.144 "num_base_bdevs_operational": 4,
00:11:28.144 "base_bdevs_list": [
00:11:28.144 {
00:11:28.144 "name": "BaseBdev1",
00:11:28.144 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.144 "is_configured": false,
00:11:28.144 "data_offset": 0,
00:11:28.144 "data_size": 0
00:11:28.144 },
00:11:28.144 {
00:11:28.144 "name": "BaseBdev2",
00:11:28.144 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.144 "is_configured": false,
00:11:28.144 "data_offset": 0,
00:11:28.144 "data_size": 0
00:11:28.144 },
00:11:28.144 {
00:11:28.144 "name": "BaseBdev3",
00:11:28.144 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.144 "is_configured": false,
00:11:28.144 "data_offset": 0,
00:11:28.144 "data_size": 0
00:11:28.144 },
00:11:28.144 {
00:11:28.144 "name": "BaseBdev4",
00:11:28.144 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.144 "is_configured": false,
00:11:28.144 "data_offset": 0,
00:11:28.144 "data_size": 0
00:11:28.144 }
00:11:28.144 ]
00:11:28.144 }'
00:11:28.144 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:28.144 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.403 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:28.403 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.403 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.403 [2024-09-28 08:49:06.367244] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:28.403 [2024-09-28 08:49:06.367292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:11:28.403 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.403 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:28.403 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.403 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.403 [2024-09-28 08:49:06.379262] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:28.403 [2024-09-28 08:49:06.379303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:28.403 [2024-09-28 08:49:06.379313] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:28.403 [2024-09-28 08:49:06.379323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:28.403 [2024-09-28 08:49:06.379329] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:28.403 [2024-09-28 08:49:06.379339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:28.403 [2024-09-28 08:49:06.379345] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:28.403 [2024-09-28 08:49:06.379354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:28.403 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.403 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:28.403 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.403 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.662 [2024-09-28 08:49:06.461864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:28.662 BaseBdev1
00:11:28.662 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.662 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:11:28.662 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:11:28.662 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:28.662 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:28.662 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:28.662 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:28.662 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:28.662 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.662 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.662 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.662 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:28.662 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.662 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.662 [
00:11:28.662 {
00:11:28.662 "name": "BaseBdev1",
00:11:28.662 "aliases": [
00:11:28.662 "3348ca55-243c-4a69-8b78-14360ddf2758"
00:11:28.662 ],
00:11:28.662 "product_name": "Malloc disk",
00:11:28.662 "block_size": 512,
00:11:28.662 "num_blocks": 65536,
00:11:28.662 "uuid": "3348ca55-243c-4a69-8b78-14360ddf2758",
00:11:28.662 "assigned_rate_limits": {
00:11:28.662 "rw_ios_per_sec": 0,
00:11:28.662 "rw_mbytes_per_sec": 0,
00:11:28.662 "r_mbytes_per_sec": 0,
00:11:28.662 "w_mbytes_per_sec": 0
00:11:28.662 },
00:11:28.662 "claimed": true,
00:11:28.662 "claim_type": "exclusive_write",
00:11:28.662 "zoned": false,
00:11:28.662 "supported_io_types": {
00:11:28.662 "read": true,
00:11:28.662 "write": true,
00:11:28.662 "unmap": true,
00:11:28.662 "flush": true,
00:11:28.662 "reset": true,
00:11:28.662 "nvme_admin": false,
00:11:28.662 "nvme_io": false,
00:11:28.662 "nvme_io_md": false,
00:11:28.662 "write_zeroes": true,
00:11:28.662 "zcopy": true,
00:11:28.662 "get_zone_info": false,
00:11:28.662 "zone_management": false,
00:11:28.662 "zone_append": false,
00:11:28.662 "compare": false,
00:11:28.662 "compare_and_write": false,
00:11:28.662 "abort": true,
00:11:28.662 "seek_hole": false,
00:11:28.662 "seek_data": false,
00:11:28.662 "copy": true,
00:11:28.662 "nvme_iov_md": false
00:11:28.662 },
00:11:28.662 "memory_domains": [
00:11:28.662 {
00:11:28.662 "dma_device_id": "system",
00:11:28.662 "dma_device_type": 1
00:11:28.662 },
00:11:28.662 {
00:11:28.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:28.662 "dma_device_type": 2
00:11:28.662 }
00:11:28.662 ],
00:11:28.662 "driver_specific": {}
00:11:28.662 }
00:11:28.662 ]
00:11:28.662 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.662 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:28.663 "name": "Existed_Raid",
00:11:28.663 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.663 "strip_size_kb": 0,
00:11:28.663 "state": "configuring",
00:11:28.663 "raid_level": "raid1",
00:11:28.663 "superblock": false,
00:11:28.663 "num_base_bdevs": 4,
00:11:28.663 "num_base_bdevs_discovered": 1,
00:11:28.663 "num_base_bdevs_operational": 4,
00:11:28.663 "base_bdevs_list": [
00:11:28.663 {
00:11:28.663 "name": "BaseBdev1",
00:11:28.663 "uuid": "3348ca55-243c-4a69-8b78-14360ddf2758",
00:11:28.663 "is_configured": true,
00:11:28.663 "data_offset": 0,
00:11:28.663 "data_size": 65536
00:11:28.663 },
00:11:28.663 {
00:11:28.663 "name": "BaseBdev2",
00:11:28.663 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.663 "is_configured": false,
00:11:28.663 "data_offset": 0,
00:11:28.663 "data_size": 0
00:11:28.663 },
00:11:28.663 {
00:11:28.663 "name": "BaseBdev3",
00:11:28.663 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.663 "is_configured": false,
00:11:28.663 "data_offset": 0,
00:11:28.663 "data_size": 0
00:11:28.663 },
00:11:28.663 {
00:11:28.663 "name": "BaseBdev4",
00:11:28.663 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.663 "is_configured": false,
00:11:28.663 "data_offset": 0,
00:11:28.663 "data_size": 0
00:11:28.663 }
00:11:28.663 ]
00:11:28.663 }'
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:28.663 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.232 [2024-09-28 08:49:06.929077] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:29.232 [2024-09-28 08:49:06.929154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.232 [2024-09-28 08:49:06.941105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
[2024-09-28 08:49:06.943234] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:29.232 [2024-09-28 08:49:06.943278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:29.232 [2024-09-28 08:49:06.943288] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:29.232 [2024-09-28 08:49:06.943298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:29.232 [2024-09-28 08:49:06.943305] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:29.232 [2024-09-28 08:49:06.943314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:29.232 "name": "Existed_Raid",
00:11:29.232 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:29.232 "strip_size_kb": 0,
00:11:29.232 "state": "configuring",
00:11:29.232 "raid_level": "raid1",
00:11:29.232 "superblock": false,
00:11:29.232 "num_base_bdevs": 4,
00:11:29.232 "num_base_bdevs_discovered": 1,
00:11:29.232 "num_base_bdevs_operational": 4,
00:11:29.232 "base_bdevs_list": [
00:11:29.232 {
00:11:29.232 "name": "BaseBdev1",
00:11:29.232 "uuid": "3348ca55-243c-4a69-8b78-14360ddf2758",
00:11:29.232 "is_configured": true,
00:11:29.232 "data_offset": 0,
00:11:29.232 "data_size": 65536
00:11:29.232 },
00:11:29.232 {
00:11:29.232 "name": "BaseBdev2",
00:11:29.232 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:29.232 "is_configured": false,
00:11:29.232 "data_offset": 0,
00:11:29.232 "data_size": 0
00:11:29.232 },
00:11:29.232 {
00:11:29.232 "name": "BaseBdev3",
00:11:29.232 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:29.232 "is_configured": false,
00:11:29.232 "data_offset": 0,
00:11:29.232 "data_size": 0
00:11:29.232 },
00:11:29.232 {
00:11:29.232 "name": "BaseBdev4",
00:11:29.232 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:29.232 "is_configured": false,
00:11:29.232 "data_offset": 0,
00:11:29.232 "data_size": 0
00:11:29.232 }
00:11:29.232 ]
00:11:29.232 }'
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:29.232 08:49:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.492 [2024-09-28 08:49:07.452762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:29.492 BaseBdev2
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.492 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.492 [
00:11:29.492 {
00:11:29.492 "name": "BaseBdev2",
00:11:29.492 "aliases": [
00:11:29.492 "dbbf35e8-e5b4-4d07-98fa-fb218911eacb"
00:11:29.492 ],
00:11:29.492 "product_name": "Malloc disk",
00:11:29.492 "block_size": 512,
00:11:29.492 "num_blocks": 65536,
00:11:29.492 "uuid": "dbbf35e8-e5b4-4d07-98fa-fb218911eacb",
00:11:29.492 "assigned_rate_limits": {
00:11:29.492 "rw_ios_per_sec": 0,
00:11:29.492 "rw_mbytes_per_sec": 0,
00:11:29.492 "r_mbytes_per_sec": 0,
00:11:29.492 "w_mbytes_per_sec": 0
00:11:29.492 },
00:11:29.492 "claimed": true,
00:11:29.492 "claim_type": "exclusive_write",
00:11:29.492 "zoned": false,
00:11:29.492 "supported_io_types": {
00:11:29.492 "read": true,
00:11:29.492 "write": true,
00:11:29.492 "unmap": true,
00:11:29.492 "flush": true,
00:11:29.492 "reset": true,
00:11:29.492 "nvme_admin": false,
00:11:29.492 "nvme_io": false,
00:11:29.492 "nvme_io_md": false,
00:11:29.492 "write_zeroes": true,
00:11:29.492 "zcopy": true,
00:11:29.492 "get_zone_info": false,
00:11:29.492 "zone_management": false,
00:11:29.492 "zone_append": false,
00:11:29.492 "compare": false,
00:11:29.492 "compare_and_write": false,
00:11:29.492 "abort": true,
00:11:29.492 "seek_hole": false,
00:11:29.492 "seek_data": false,
00:11:29.492 "copy": true,
00:11:29.492 "nvme_iov_md": false
00:11:29.492 },
00:11:29.492 "memory_domains": [
00:11:29.492 {
00:11:29.752 "dma_device_id": "system",
00:11:29.752 "dma_device_type": 1
00:11:29.752 },
00:11:29.752 {
00:11:29.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:29.752 "dma_device_type": 2
00:11:29.752 }
00:11:29.752 ],
00:11:29.752 "driver_specific": {}
00:11:29.752 }
00:11:29.752 ]
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:29.752 "name": "Existed_Raid",
00:11:29.752 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:29.752 "strip_size_kb": 0,
00:11:29.752 "state": "configuring",
00:11:29.752 "raid_level": "raid1",
00:11:29.752 "superblock": false,
00:11:29.752 "num_base_bdevs": 4,
00:11:29.752 "num_base_bdevs_discovered": 2,
00:11:29.752 "num_base_bdevs_operational": 4,
00:11:29.752 "base_bdevs_list": [
00:11:29.752 {
00:11:29.752 "name": "BaseBdev1",
00:11:29.752 "uuid": "3348ca55-243c-4a69-8b78-14360ddf2758",
00:11:29.752 "is_configured": true,
00:11:29.752 "data_offset": 0,
00:11:29.752 "data_size": 65536
00:11:29.752 },
00:11:29.752 {
00:11:29.752 "name": "BaseBdev2",
00:11:29.752 "uuid": "dbbf35e8-e5b4-4d07-98fa-fb218911eacb",
00:11:29.752 "is_configured": true,
00:11:29.752 "data_offset": 0,
00:11:29.752 "data_size": 65536
00:11:29.752 },
00:11:29.752 {
00:11:29.752 "name": "BaseBdev3",
00:11:29.752 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:29.752 "is_configured": false,
00:11:29.752 "data_offset": 0,
00:11:29.752 "data_size": 0
00:11:29.752 },
00:11:29.752 {
00:11:29.752 "name": "BaseBdev4",
00:11:29.752 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:29.752 "is_configured": false,
00:11:29.752 "data_offset": 0,
00:11:29.752 "data_size": 0
00:11:29.752 }
00:11:29.752 ]
00:11:29.752 }'
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:29.752 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.011 [2024-09-28 08:49:07.963054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:30.011 BaseBdev3
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.011 [
00:11:30.011 {
00:11:30.011 "name": "BaseBdev3",
00:11:30.011 "aliases": [
00:11:30.011 "b9ae02ab-7081-4341-9fe4-ac1c299d3ac9"
00:11:30.011 ],
00:11:30.011 "product_name": "Malloc disk",
00:11:30.011 "block_size": 512,
00:11:30.011 "num_blocks": 65536,
00:11:30.011 "uuid": "b9ae02ab-7081-4341-9fe4-ac1c299d3ac9",
00:11:30.011 "assigned_rate_limits": {
00:11:30.011 "rw_ios_per_sec": 0,
00:11:30.011 "rw_mbytes_per_sec": 0,
00:11:30.011 "r_mbytes_per_sec": 0,
00:11:30.011 "w_mbytes_per_sec": 0
00:11:30.011 },
00:11:30.011 "claimed": true,
00:11:30.011 "claim_type": "exclusive_write",
00:11:30.011 "zoned": false,
00:11:30.011 "supported_io_types": {
00:11:30.011 "read": true,
00:11:30.011 "write": true,
00:11:30.011 "unmap": true,
00:11:30.011 "flush": true,
00:11:30.011 "reset": true,
00:11:30.011 "nvme_admin": false,
00:11:30.011 "nvme_io": false,
00:11:30.011 "nvme_io_md": false,
00:11:30.011 "write_zeroes": true,
00:11:30.011 "zcopy": true,
00:11:30.011 "get_zone_info": false,
00:11:30.011 "zone_management": false,
00:11:30.011 "zone_append": false,
00:11:30.011 "compare": false,
00:11:30.011 "compare_and_write": false,
00:11:30.011 "abort": true,
00:11:30.011 "seek_hole": false,
00:11:30.011 "seek_data": false,
00:11:30.011 "copy": true,
00:11:30.011 "nvme_iov_md": false
00:11:30.011 },
00:11:30.011 "memory_domains": [
00:11:30.011 {
00:11:30.011 "dma_device_id": "system",
00:11:30.011 "dma_device_type": 1
00:11:30.011 },
00:11:30.011 {
00:11:30.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:30.011 "dma_device_type": 2
00:11:30.011 }
00:11:30.011 ],
00:11:30.011 "driver_specific": {}
00:11:30.011 }
00:11:30.011 ]
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:30.011 08:49:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:30.011 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:30.011 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:30.011 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:30.011 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:30.011 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:30.011 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:30.011 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:30.011 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:30.011 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:30.268 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:30.268 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:30.268 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.268 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.268 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.268 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:30.268 "name": "Existed_Raid",
00:11:30.268 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:30.268 "strip_size_kb": 0,
00:11:30.268 "state": "configuring",
00:11:30.268 "raid_level": "raid1",
00:11:30.268 "superblock": false,
00:11:30.268 "num_base_bdevs": 4,
00:11:30.268 "num_base_bdevs_discovered": 3,
00:11:30.268 "num_base_bdevs_operational": 4,
00:11:30.268 "base_bdevs_list": [
00:11:30.268 {
00:11:30.268 "name": "BaseBdev1",
00:11:30.268 "uuid": "3348ca55-243c-4a69-8b78-14360ddf2758",
00:11:30.268 "is_configured": true,
00:11:30.268 "data_offset": 0,
00:11:30.268 "data_size": 65536
00:11:30.268 },
00:11:30.268 {
00:11:30.268 "name": "BaseBdev2",
00:11:30.268 "uuid": "dbbf35e8-e5b4-4d07-98fa-fb218911eacb",
00:11:30.268 "is_configured": true,
00:11:30.268 "data_offset": 0,
00:11:30.268 "data_size": 65536
00:11:30.268 },
00:11:30.268 {
00:11:30.268 "name": "BaseBdev3",
00:11:30.268 "uuid": "b9ae02ab-7081-4341-9fe4-ac1c299d3ac9",
00:11:30.268 "is_configured": true,
00:11:30.268 "data_offset": 0,
00:11:30.268 "data_size": 65536
00:11:30.268 },
00:11:30.268 {
00:11:30.268 "name": "BaseBdev4",
00:11:30.268 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:30.268 "is_configured": false,
00:11:30.268 "data_offset": 0,
00:11:30.268 "data_size": 0
00:11:30.268 }
00:11:30.268 ]
00:11:30.268 }'
00:11:30.268 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:30.268 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.526 [2024-09-28 08:49:08.489274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
[2024-09-28 08:49:08.489447] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
[2024-09-28 08:49:08.489466] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
[2024-09-28 08:49:08.489834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
[2024-09-28 08:49:08.490043] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
[2024-09-28 08:49:08.490058] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
[2024-09-28 08:49:08.490353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:30.526 BaseBdev4
00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.526 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.526 [ 00:11:30.786 { 00:11:30.786 "name": "BaseBdev4", 00:11:30.786 "aliases": [ 00:11:30.786 "f8066b43-85ed-4378-ad1b-c7982bfcab79" 00:11:30.786 ], 00:11:30.786 "product_name": "Malloc disk", 00:11:30.786 "block_size": 512, 00:11:30.786 "num_blocks": 65536, 00:11:30.786 "uuid": "f8066b43-85ed-4378-ad1b-c7982bfcab79", 00:11:30.786 "assigned_rate_limits": { 00:11:30.786 "rw_ios_per_sec": 0, 00:11:30.786 "rw_mbytes_per_sec": 0, 00:11:30.786 "r_mbytes_per_sec": 0, 00:11:30.786 "w_mbytes_per_sec": 0 00:11:30.786 }, 00:11:30.786 "claimed": true, 00:11:30.786 "claim_type": "exclusive_write", 00:11:30.786 "zoned": false, 00:11:30.786 "supported_io_types": { 00:11:30.786 "read": true, 00:11:30.786 "write": true, 00:11:30.786 "unmap": true, 00:11:30.786 "flush": true, 00:11:30.786 "reset": true, 00:11:30.786 
"nvme_admin": false, 00:11:30.786 "nvme_io": false, 00:11:30.786 "nvme_io_md": false, 00:11:30.786 "write_zeroes": true, 00:11:30.786 "zcopy": true, 00:11:30.786 "get_zone_info": false, 00:11:30.786 "zone_management": false, 00:11:30.786 "zone_append": false, 00:11:30.786 "compare": false, 00:11:30.786 "compare_and_write": false, 00:11:30.786 "abort": true, 00:11:30.786 "seek_hole": false, 00:11:30.786 "seek_data": false, 00:11:30.786 "copy": true, 00:11:30.786 "nvme_iov_md": false 00:11:30.786 }, 00:11:30.786 "memory_domains": [ 00:11:30.786 { 00:11:30.786 "dma_device_id": "system", 00:11:30.786 "dma_device_type": 1 00:11:30.786 }, 00:11:30.786 { 00:11:30.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.786 "dma_device_type": 2 00:11:30.786 } 00:11:30.786 ], 00:11:30.786 "driver_specific": {} 00:11:30.786 } 00:11:30.786 ] 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.786 08:49:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.786 "name": "Existed_Raid", 00:11:30.786 "uuid": "829eba60-96fa-4d7e-a1f1-3a2988d9539f", 00:11:30.786 "strip_size_kb": 0, 00:11:30.786 "state": "online", 00:11:30.786 "raid_level": "raid1", 00:11:30.786 "superblock": false, 00:11:30.786 "num_base_bdevs": 4, 00:11:30.786 "num_base_bdevs_discovered": 4, 00:11:30.786 "num_base_bdevs_operational": 4, 00:11:30.786 "base_bdevs_list": [ 00:11:30.786 { 00:11:30.786 "name": "BaseBdev1", 00:11:30.786 "uuid": "3348ca55-243c-4a69-8b78-14360ddf2758", 00:11:30.786 "is_configured": true, 00:11:30.786 "data_offset": 0, 00:11:30.786 "data_size": 65536 00:11:30.786 }, 00:11:30.786 { 00:11:30.786 "name": "BaseBdev2", 00:11:30.786 "uuid": "dbbf35e8-e5b4-4d07-98fa-fb218911eacb", 00:11:30.786 "is_configured": true, 00:11:30.786 "data_offset": 0, 00:11:30.786 "data_size": 65536 00:11:30.786 }, 00:11:30.786 { 00:11:30.786 "name": "BaseBdev3", 00:11:30.786 "uuid": 
"b9ae02ab-7081-4341-9fe4-ac1c299d3ac9", 00:11:30.786 "is_configured": true, 00:11:30.786 "data_offset": 0, 00:11:30.786 "data_size": 65536 00:11:30.786 }, 00:11:30.786 { 00:11:30.786 "name": "BaseBdev4", 00:11:30.786 "uuid": "f8066b43-85ed-4378-ad1b-c7982bfcab79", 00:11:30.786 "is_configured": true, 00:11:30.786 "data_offset": 0, 00:11:30.786 "data_size": 65536 00:11:30.786 } 00:11:30.786 ] 00:11:30.786 }' 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.786 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.047 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:31.047 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:31.047 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:31.047 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:31.047 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:31.047 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:31.047 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:31.047 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.047 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.047 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:31.047 [2024-09-28 08:49:08.920887] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:31.047 08:49:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.047 08:49:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:31.047 "name": "Existed_Raid", 00:11:31.047 "aliases": [ 00:11:31.047 "829eba60-96fa-4d7e-a1f1-3a2988d9539f" 00:11:31.047 ], 00:11:31.047 "product_name": "Raid Volume", 00:11:31.047 "block_size": 512, 00:11:31.047 "num_blocks": 65536, 00:11:31.047 "uuid": "829eba60-96fa-4d7e-a1f1-3a2988d9539f", 00:11:31.047 "assigned_rate_limits": { 00:11:31.047 "rw_ios_per_sec": 0, 00:11:31.047 "rw_mbytes_per_sec": 0, 00:11:31.047 "r_mbytes_per_sec": 0, 00:11:31.047 "w_mbytes_per_sec": 0 00:11:31.047 }, 00:11:31.047 "claimed": false, 00:11:31.047 "zoned": false, 00:11:31.047 "supported_io_types": { 00:11:31.047 "read": true, 00:11:31.047 "write": true, 00:11:31.047 "unmap": false, 00:11:31.047 "flush": false, 00:11:31.047 "reset": true, 00:11:31.047 "nvme_admin": false, 00:11:31.047 "nvme_io": false, 00:11:31.047 "nvme_io_md": false, 00:11:31.047 "write_zeroes": true, 00:11:31.047 "zcopy": false, 00:11:31.047 "get_zone_info": false, 00:11:31.047 "zone_management": false, 00:11:31.048 "zone_append": false, 00:11:31.048 "compare": false, 00:11:31.048 "compare_and_write": false, 00:11:31.048 "abort": false, 00:11:31.048 "seek_hole": false, 00:11:31.048 "seek_data": false, 00:11:31.048 "copy": false, 00:11:31.048 "nvme_iov_md": false 00:11:31.048 }, 00:11:31.048 "memory_domains": [ 00:11:31.048 { 00:11:31.048 "dma_device_id": "system", 00:11:31.048 "dma_device_type": 1 00:11:31.048 }, 00:11:31.048 { 00:11:31.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.048 "dma_device_type": 2 00:11:31.048 }, 00:11:31.048 { 00:11:31.048 "dma_device_id": "system", 00:11:31.048 "dma_device_type": 1 00:11:31.048 }, 00:11:31.048 { 00:11:31.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.048 "dma_device_type": 2 00:11:31.048 }, 00:11:31.048 { 00:11:31.048 "dma_device_id": "system", 00:11:31.048 "dma_device_type": 1 00:11:31.048 }, 00:11:31.048 { 00:11:31.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:31.048 "dma_device_type": 2 00:11:31.048 }, 00:11:31.048 { 00:11:31.048 "dma_device_id": "system", 00:11:31.048 "dma_device_type": 1 00:11:31.048 }, 00:11:31.048 { 00:11:31.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.048 "dma_device_type": 2 00:11:31.048 } 00:11:31.048 ], 00:11:31.048 "driver_specific": { 00:11:31.048 "raid": { 00:11:31.048 "uuid": "829eba60-96fa-4d7e-a1f1-3a2988d9539f", 00:11:31.048 "strip_size_kb": 0, 00:11:31.048 "state": "online", 00:11:31.048 "raid_level": "raid1", 00:11:31.048 "superblock": false, 00:11:31.048 "num_base_bdevs": 4, 00:11:31.048 "num_base_bdevs_discovered": 4, 00:11:31.048 "num_base_bdevs_operational": 4, 00:11:31.048 "base_bdevs_list": [ 00:11:31.048 { 00:11:31.048 "name": "BaseBdev1", 00:11:31.048 "uuid": "3348ca55-243c-4a69-8b78-14360ddf2758", 00:11:31.048 "is_configured": true, 00:11:31.048 "data_offset": 0, 00:11:31.048 "data_size": 65536 00:11:31.048 }, 00:11:31.048 { 00:11:31.048 "name": "BaseBdev2", 00:11:31.048 "uuid": "dbbf35e8-e5b4-4d07-98fa-fb218911eacb", 00:11:31.048 "is_configured": true, 00:11:31.048 "data_offset": 0, 00:11:31.048 "data_size": 65536 00:11:31.048 }, 00:11:31.048 { 00:11:31.048 "name": "BaseBdev3", 00:11:31.048 "uuid": "b9ae02ab-7081-4341-9fe4-ac1c299d3ac9", 00:11:31.048 "is_configured": true, 00:11:31.048 "data_offset": 0, 00:11:31.048 "data_size": 65536 00:11:31.048 }, 00:11:31.048 { 00:11:31.048 "name": "BaseBdev4", 00:11:31.048 "uuid": "f8066b43-85ed-4378-ad1b-c7982bfcab79", 00:11:31.048 "is_configured": true, 00:11:31.048 "data_offset": 0, 00:11:31.048 "data_size": 65536 00:11:31.048 } 00:11:31.048 ] 00:11:31.048 } 00:11:31.048 } 00:11:31.048 }' 00:11:31.048 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:31.048 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:31.048 BaseBdev2 00:11:31.048 BaseBdev3 
00:11:31.048 BaseBdev4' 00:11:31.048 08:49:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.048 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:31.048 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.048 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:31.048 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.048 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.048 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.308 08:49:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.308 08:49:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.308 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.308 [2024-09-28 08:49:09.220092] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:31.567 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.567 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:31.567 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:31.567 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:31.567 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:31.567 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:31.567 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:31.567 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.567 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.567 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.567 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.567 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.567 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.567 
08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.567 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.567 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.568 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.568 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.568 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.568 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.568 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.568 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.568 "name": "Existed_Raid", 00:11:31.568 "uuid": "829eba60-96fa-4d7e-a1f1-3a2988d9539f", 00:11:31.568 "strip_size_kb": 0, 00:11:31.568 "state": "online", 00:11:31.568 "raid_level": "raid1", 00:11:31.568 "superblock": false, 00:11:31.568 "num_base_bdevs": 4, 00:11:31.568 "num_base_bdevs_discovered": 3, 00:11:31.568 "num_base_bdevs_operational": 3, 00:11:31.568 "base_bdevs_list": [ 00:11:31.568 { 00:11:31.568 "name": null, 00:11:31.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.568 "is_configured": false, 00:11:31.568 "data_offset": 0, 00:11:31.568 "data_size": 65536 00:11:31.568 }, 00:11:31.568 { 00:11:31.568 "name": "BaseBdev2", 00:11:31.568 "uuid": "dbbf35e8-e5b4-4d07-98fa-fb218911eacb", 00:11:31.568 "is_configured": true, 00:11:31.568 "data_offset": 0, 00:11:31.568 "data_size": 65536 00:11:31.568 }, 00:11:31.568 { 00:11:31.568 "name": "BaseBdev3", 00:11:31.568 "uuid": "b9ae02ab-7081-4341-9fe4-ac1c299d3ac9", 00:11:31.568 "is_configured": true, 00:11:31.568 "data_offset": 0, 
00:11:31.568 "data_size": 65536 00:11:31.568 }, 00:11:31.568 { 00:11:31.568 "name": "BaseBdev4", 00:11:31.568 "uuid": "f8066b43-85ed-4378-ad1b-c7982bfcab79", 00:11:31.568 "is_configured": true, 00:11:31.568 "data_offset": 0, 00:11:31.568 "data_size": 65536 00:11:31.568 } 00:11:31.568 ] 00:11:31.568 }' 00:11:31.568 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.568 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.827 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:31.827 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.827 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.827 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:31.827 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.827 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.827 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.827 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.827 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.827 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:31.827 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.827 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.827 [2024-09-28 08:49:09.795316] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:32.087 08:49:09 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.087 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:32.087 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:32.087 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.087 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:32.087 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.087 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.087 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.087 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:32.087 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:32.087 08:49:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:32.087 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.087 08:49:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.087 [2024-09-28 08:49:09.955518] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:32.087 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.087 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:32.087 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:32.087 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:32.087 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:32.087 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.087 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.087 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.346 [2024-09-28 08:49:10.114878] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:32.346 [2024-09-28 08:49:10.115028] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.346 [2024-09-28 08:49:10.215410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.346 [2024-09-28 08:49:10.215573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.346 [2024-09-28 08:49:10.215617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.346 BaseBdev2 00:11:32.346 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.347 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:32.347 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:32.347 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:32.347 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:32.347 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 
-- # [[ -z '' ]] 00:11:32.347 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:32.347 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:32.347 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.347 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.347 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.347 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:32.347 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.347 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.347 [ 00:11:32.347 { 00:11:32.347 "name": "BaseBdev2", 00:11:32.347 "aliases": [ 00:11:32.347 "153ea208-444e-4138-9fbc-6f89886dc0f9" 00:11:32.347 ], 00:11:32.347 "product_name": "Malloc disk", 00:11:32.347 "block_size": 512, 00:11:32.347 "num_blocks": 65536, 00:11:32.347 "uuid": "153ea208-444e-4138-9fbc-6f89886dc0f9", 00:11:32.347 "assigned_rate_limits": { 00:11:32.347 "rw_ios_per_sec": 0, 00:11:32.347 "rw_mbytes_per_sec": 0, 00:11:32.347 "r_mbytes_per_sec": 0, 00:11:32.606 "w_mbytes_per_sec": 0 00:11:32.606 }, 00:11:32.606 "claimed": false, 00:11:32.607 "zoned": false, 00:11:32.607 "supported_io_types": { 00:11:32.607 "read": true, 00:11:32.607 "write": true, 00:11:32.607 "unmap": true, 00:11:32.607 "flush": true, 00:11:32.607 "reset": true, 00:11:32.607 "nvme_admin": false, 00:11:32.607 "nvme_io": false, 00:11:32.607 "nvme_io_md": false, 00:11:32.607 "write_zeroes": true, 00:11:32.607 "zcopy": true, 00:11:32.607 "get_zone_info": false, 00:11:32.607 "zone_management": false, 00:11:32.607 "zone_append": false, 00:11:32.607 "compare": false, 
00:11:32.607 "compare_and_write": false, 00:11:32.607 "abort": true, 00:11:32.607 "seek_hole": false, 00:11:32.607 "seek_data": false, 00:11:32.607 "copy": true, 00:11:32.607 "nvme_iov_md": false 00:11:32.607 }, 00:11:32.607 "memory_domains": [ 00:11:32.607 { 00:11:32.607 "dma_device_id": "system", 00:11:32.607 "dma_device_type": 1 00:11:32.607 }, 00:11:32.607 { 00:11:32.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.607 "dma_device_type": 2 00:11:32.607 } 00:11:32.607 ], 00:11:32.607 "driver_specific": {} 00:11:32.607 } 00:11:32.607 ] 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.607 BaseBdev3 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' 
]] 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.607 [ 00:11:32.607 { 00:11:32.607 "name": "BaseBdev3", 00:11:32.607 "aliases": [ 00:11:32.607 "5885636c-b178-496f-bcb7-d3cc6cd93816" 00:11:32.607 ], 00:11:32.607 "product_name": "Malloc disk", 00:11:32.607 "block_size": 512, 00:11:32.607 "num_blocks": 65536, 00:11:32.607 "uuid": "5885636c-b178-496f-bcb7-d3cc6cd93816", 00:11:32.607 "assigned_rate_limits": { 00:11:32.607 "rw_ios_per_sec": 0, 00:11:32.607 "rw_mbytes_per_sec": 0, 00:11:32.607 "r_mbytes_per_sec": 0, 00:11:32.607 "w_mbytes_per_sec": 0 00:11:32.607 }, 00:11:32.607 "claimed": false, 00:11:32.607 "zoned": false, 00:11:32.607 "supported_io_types": { 00:11:32.607 "read": true, 00:11:32.607 "write": true, 00:11:32.607 "unmap": true, 00:11:32.607 "flush": true, 00:11:32.607 "reset": true, 00:11:32.607 "nvme_admin": false, 00:11:32.607 "nvme_io": false, 00:11:32.607 "nvme_io_md": false, 00:11:32.607 "write_zeroes": true, 00:11:32.607 "zcopy": true, 00:11:32.607 "get_zone_info": false, 00:11:32.607 "zone_management": false, 00:11:32.607 "zone_append": false, 00:11:32.607 "compare": false, 00:11:32.607 
"compare_and_write": false, 00:11:32.607 "abort": true, 00:11:32.607 "seek_hole": false, 00:11:32.607 "seek_data": false, 00:11:32.607 "copy": true, 00:11:32.607 "nvme_iov_md": false 00:11:32.607 }, 00:11:32.607 "memory_domains": [ 00:11:32.607 { 00:11:32.607 "dma_device_id": "system", 00:11:32.607 "dma_device_type": 1 00:11:32.607 }, 00:11:32.607 { 00:11:32.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.607 "dma_device_type": 2 00:11:32.607 } 00:11:32.607 ], 00:11:32.607 "driver_specific": {} 00:11:32.607 } 00:11:32.607 ] 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.607 BaseBdev4 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.607 [ 00:11:32.607 { 00:11:32.607 "name": "BaseBdev4", 00:11:32.607 "aliases": [ 00:11:32.607 "bdcca8a1-2917-4b49-9a5d-cb9fae1f8ee6" 00:11:32.607 ], 00:11:32.607 "product_name": "Malloc disk", 00:11:32.607 "block_size": 512, 00:11:32.607 "num_blocks": 65536, 00:11:32.607 "uuid": "bdcca8a1-2917-4b49-9a5d-cb9fae1f8ee6", 00:11:32.607 "assigned_rate_limits": { 00:11:32.607 "rw_ios_per_sec": 0, 00:11:32.607 "rw_mbytes_per_sec": 0, 00:11:32.607 "r_mbytes_per_sec": 0, 00:11:32.607 "w_mbytes_per_sec": 0 00:11:32.607 }, 00:11:32.607 "claimed": false, 00:11:32.607 "zoned": false, 00:11:32.607 "supported_io_types": { 00:11:32.607 "read": true, 00:11:32.607 "write": true, 00:11:32.607 "unmap": true, 00:11:32.607 "flush": true, 00:11:32.607 "reset": true, 00:11:32.607 "nvme_admin": false, 00:11:32.607 "nvme_io": false, 00:11:32.607 "nvme_io_md": false, 00:11:32.607 "write_zeroes": true, 00:11:32.607 "zcopy": true, 00:11:32.607 "get_zone_info": false, 00:11:32.607 "zone_management": false, 00:11:32.607 "zone_append": false, 00:11:32.607 "compare": false, 00:11:32.607 
"compare_and_write": false, 00:11:32.607 "abort": true, 00:11:32.607 "seek_hole": false, 00:11:32.607 "seek_data": false, 00:11:32.607 "copy": true, 00:11:32.607 "nvme_iov_md": false 00:11:32.607 }, 00:11:32.607 "memory_domains": [ 00:11:32.607 { 00:11:32.607 "dma_device_id": "system", 00:11:32.607 "dma_device_type": 1 00:11:32.607 }, 00:11:32.607 { 00:11:32.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.607 "dma_device_type": 2 00:11:32.607 } 00:11:32.607 ], 00:11:32.607 "driver_specific": {} 00:11:32.607 } 00:11:32.607 ] 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.607 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.607 [2024-09-28 08:49:10.531946] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:32.607 [2024-09-28 08:49:10.532035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:32.607 [2024-09-28 08:49:10.532078] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.608 [2024-09-28 08:49:10.534105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:32.608 [2024-09-28 08:49:10.534192] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.608 "name": "Existed_Raid", 00:11:32.608 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:32.608 "strip_size_kb": 0, 00:11:32.608 "state": "configuring", 00:11:32.608 "raid_level": "raid1", 00:11:32.608 "superblock": false, 00:11:32.608 "num_base_bdevs": 4, 00:11:32.608 "num_base_bdevs_discovered": 3, 00:11:32.608 "num_base_bdevs_operational": 4, 00:11:32.608 "base_bdevs_list": [ 00:11:32.608 { 00:11:32.608 "name": "BaseBdev1", 00:11:32.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.608 "is_configured": false, 00:11:32.608 "data_offset": 0, 00:11:32.608 "data_size": 0 00:11:32.608 }, 00:11:32.608 { 00:11:32.608 "name": "BaseBdev2", 00:11:32.608 "uuid": "153ea208-444e-4138-9fbc-6f89886dc0f9", 00:11:32.608 "is_configured": true, 00:11:32.608 "data_offset": 0, 00:11:32.608 "data_size": 65536 00:11:32.608 }, 00:11:32.608 { 00:11:32.608 "name": "BaseBdev3", 00:11:32.608 "uuid": "5885636c-b178-496f-bcb7-d3cc6cd93816", 00:11:32.608 "is_configured": true, 00:11:32.608 "data_offset": 0, 00:11:32.608 "data_size": 65536 00:11:32.608 }, 00:11:32.608 { 00:11:32.608 "name": "BaseBdev4", 00:11:32.608 "uuid": "bdcca8a1-2917-4b49-9a5d-cb9fae1f8ee6", 00:11:32.608 "is_configured": true, 00:11:32.608 "data_offset": 0, 00:11:32.608 "data_size": 65536 00:11:32.608 } 00:11:32.608 ] 00:11:32.608 }' 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.608 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.177 [2024-09-28 08:49:10.987198] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.177 08:49:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.177 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.177 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.177 "name": "Existed_Raid", 00:11:33.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.177 
"strip_size_kb": 0, 00:11:33.177 "state": "configuring", 00:11:33.177 "raid_level": "raid1", 00:11:33.177 "superblock": false, 00:11:33.177 "num_base_bdevs": 4, 00:11:33.177 "num_base_bdevs_discovered": 2, 00:11:33.177 "num_base_bdevs_operational": 4, 00:11:33.177 "base_bdevs_list": [ 00:11:33.177 { 00:11:33.177 "name": "BaseBdev1", 00:11:33.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.177 "is_configured": false, 00:11:33.177 "data_offset": 0, 00:11:33.177 "data_size": 0 00:11:33.177 }, 00:11:33.177 { 00:11:33.177 "name": null, 00:11:33.177 "uuid": "153ea208-444e-4138-9fbc-6f89886dc0f9", 00:11:33.177 "is_configured": false, 00:11:33.177 "data_offset": 0, 00:11:33.177 "data_size": 65536 00:11:33.177 }, 00:11:33.177 { 00:11:33.177 "name": "BaseBdev3", 00:11:33.177 "uuid": "5885636c-b178-496f-bcb7-d3cc6cd93816", 00:11:33.177 "is_configured": true, 00:11:33.177 "data_offset": 0, 00:11:33.177 "data_size": 65536 00:11:33.177 }, 00:11:33.177 { 00:11:33.177 "name": "BaseBdev4", 00:11:33.177 "uuid": "bdcca8a1-2917-4b49-9a5d-cb9fae1f8ee6", 00:11:33.177 "is_configured": true, 00:11:33.177 "data_offset": 0, 00:11:33.177 "data_size": 65536 00:11:33.177 } 00:11:33.177 ] 00:11:33.177 }' 00:11:33.177 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.177 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.746 08:49:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.746 [2024-09-28 08:49:11.564782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.746 BaseBdev1 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.746 [ 00:11:33.746 { 00:11:33.746 "name": "BaseBdev1", 00:11:33.746 "aliases": [ 00:11:33.746 "ab64f637-4a5a-4336-b106-21aaa91bae28" 00:11:33.746 ], 00:11:33.746 "product_name": "Malloc disk", 00:11:33.746 "block_size": 512, 00:11:33.746 "num_blocks": 65536, 00:11:33.746 "uuid": "ab64f637-4a5a-4336-b106-21aaa91bae28", 00:11:33.746 "assigned_rate_limits": { 00:11:33.746 "rw_ios_per_sec": 0, 00:11:33.746 "rw_mbytes_per_sec": 0, 00:11:33.746 "r_mbytes_per_sec": 0, 00:11:33.746 "w_mbytes_per_sec": 0 00:11:33.746 }, 00:11:33.746 "claimed": true, 00:11:33.746 "claim_type": "exclusive_write", 00:11:33.746 "zoned": false, 00:11:33.746 "supported_io_types": { 00:11:33.746 "read": true, 00:11:33.746 "write": true, 00:11:33.746 "unmap": true, 00:11:33.746 "flush": true, 00:11:33.746 "reset": true, 00:11:33.746 "nvme_admin": false, 00:11:33.746 "nvme_io": false, 00:11:33.746 "nvme_io_md": false, 00:11:33.746 "write_zeroes": true, 00:11:33.746 "zcopy": true, 00:11:33.746 "get_zone_info": false, 00:11:33.746 "zone_management": false, 00:11:33.746 "zone_append": false, 00:11:33.746 "compare": false, 00:11:33.746 "compare_and_write": false, 00:11:33.746 "abort": true, 00:11:33.746 "seek_hole": false, 00:11:33.746 "seek_data": false, 00:11:33.746 "copy": true, 00:11:33.746 "nvme_iov_md": false 00:11:33.746 }, 00:11:33.746 "memory_domains": [ 00:11:33.746 { 00:11:33.746 "dma_device_id": "system", 00:11:33.746 "dma_device_type": 1 00:11:33.746 }, 00:11:33.746 { 00:11:33.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.746 "dma_device_type": 2 00:11:33.746 } 00:11:33.746 ], 00:11:33.746 "driver_specific": {} 00:11:33.746 } 00:11:33.746 ] 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.746 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.747 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.747 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.747 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.747 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.747 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.747 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.747 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.747 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.747 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.747 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.747 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.747 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.747 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.747 "name": "Existed_Raid", 00:11:33.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.747 
"strip_size_kb": 0, 00:11:33.747 "state": "configuring", 00:11:33.747 "raid_level": "raid1", 00:11:33.747 "superblock": false, 00:11:33.747 "num_base_bdevs": 4, 00:11:33.747 "num_base_bdevs_discovered": 3, 00:11:33.747 "num_base_bdevs_operational": 4, 00:11:33.747 "base_bdevs_list": [ 00:11:33.747 { 00:11:33.747 "name": "BaseBdev1", 00:11:33.747 "uuid": "ab64f637-4a5a-4336-b106-21aaa91bae28", 00:11:33.747 "is_configured": true, 00:11:33.747 "data_offset": 0, 00:11:33.747 "data_size": 65536 00:11:33.747 }, 00:11:33.747 { 00:11:33.747 "name": null, 00:11:33.747 "uuid": "153ea208-444e-4138-9fbc-6f89886dc0f9", 00:11:33.747 "is_configured": false, 00:11:33.747 "data_offset": 0, 00:11:33.747 "data_size": 65536 00:11:33.747 }, 00:11:33.747 { 00:11:33.747 "name": "BaseBdev3", 00:11:33.747 "uuid": "5885636c-b178-496f-bcb7-d3cc6cd93816", 00:11:33.747 "is_configured": true, 00:11:33.747 "data_offset": 0, 00:11:33.747 "data_size": 65536 00:11:33.747 }, 00:11:33.747 { 00:11:33.747 "name": "BaseBdev4", 00:11:33.747 "uuid": "bdcca8a1-2917-4b49-9a5d-cb9fae1f8ee6", 00:11:33.747 "is_configured": true, 00:11:33.747 "data_offset": 0, 00:11:33.747 "data_size": 65536 00:11:33.747 } 00:11:33.747 ] 00:11:33.747 }' 00:11:33.747 08:49:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.747 08:49:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.315 
08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.315 [2024-09-28 08:49:12.103880] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.315 "name": "Existed_Raid", 00:11:34.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.315 "strip_size_kb": 0, 00:11:34.315 "state": "configuring", 00:11:34.315 "raid_level": "raid1", 00:11:34.315 "superblock": false, 00:11:34.315 "num_base_bdevs": 4, 00:11:34.315 "num_base_bdevs_discovered": 2, 00:11:34.315 "num_base_bdevs_operational": 4, 00:11:34.315 "base_bdevs_list": [ 00:11:34.315 { 00:11:34.315 "name": "BaseBdev1", 00:11:34.315 "uuid": "ab64f637-4a5a-4336-b106-21aaa91bae28", 00:11:34.315 "is_configured": true, 00:11:34.315 "data_offset": 0, 00:11:34.315 "data_size": 65536 00:11:34.315 }, 00:11:34.315 { 00:11:34.315 "name": null, 00:11:34.315 "uuid": "153ea208-444e-4138-9fbc-6f89886dc0f9", 00:11:34.315 "is_configured": false, 00:11:34.315 "data_offset": 0, 00:11:34.315 "data_size": 65536 00:11:34.315 }, 00:11:34.315 { 00:11:34.315 "name": null, 00:11:34.315 "uuid": "5885636c-b178-496f-bcb7-d3cc6cd93816", 00:11:34.315 "is_configured": false, 00:11:34.315 "data_offset": 0, 00:11:34.315 "data_size": 65536 00:11:34.315 }, 00:11:34.315 { 00:11:34.315 "name": "BaseBdev4", 00:11:34.315 "uuid": "bdcca8a1-2917-4b49-9a5d-cb9fae1f8ee6", 00:11:34.315 "is_configured": true, 00:11:34.315 "data_offset": 0, 00:11:34.315 "data_size": 65536 00:11:34.315 } 00:11:34.315 ] 00:11:34.315 }' 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.315 08:49:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:34.574 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.574 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:34.574 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.574 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.574 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.834 [2024-09-28 08:49:12.587095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.834 "name": "Existed_Raid", 00:11:34.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.834 "strip_size_kb": 0, 00:11:34.834 "state": "configuring", 00:11:34.834 "raid_level": "raid1", 00:11:34.834 "superblock": false, 00:11:34.834 "num_base_bdevs": 4, 00:11:34.834 "num_base_bdevs_discovered": 3, 00:11:34.834 "num_base_bdevs_operational": 4, 00:11:34.834 "base_bdevs_list": [ 00:11:34.834 { 00:11:34.834 "name": "BaseBdev1", 00:11:34.834 "uuid": "ab64f637-4a5a-4336-b106-21aaa91bae28", 00:11:34.834 "is_configured": true, 00:11:34.834 "data_offset": 0, 00:11:34.834 "data_size": 65536 00:11:34.834 }, 00:11:34.834 { 00:11:34.834 "name": null, 00:11:34.834 "uuid": "153ea208-444e-4138-9fbc-6f89886dc0f9", 00:11:34.834 "is_configured": false, 00:11:34.834 "data_offset": 0, 00:11:34.834 "data_size": 65536 00:11:34.834 }, 00:11:34.834 { 
00:11:34.834 "name": "BaseBdev3", 00:11:34.834 "uuid": "5885636c-b178-496f-bcb7-d3cc6cd93816", 00:11:34.834 "is_configured": true, 00:11:34.834 "data_offset": 0, 00:11:34.834 "data_size": 65536 00:11:34.834 }, 00:11:34.834 { 00:11:34.834 "name": "BaseBdev4", 00:11:34.834 "uuid": "bdcca8a1-2917-4b49-9a5d-cb9fae1f8ee6", 00:11:34.834 "is_configured": true, 00:11:34.834 "data_offset": 0, 00:11:34.834 "data_size": 65536 00:11:34.834 } 00:11:34.834 ] 00:11:34.834 }' 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.834 08:49:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.094 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.094 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:35.094 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.094 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.094 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.094 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:35.094 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:35.094 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.094 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.094 [2024-09-28 08:49:13.082290] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.353 "name": "Existed_Raid", 00:11:35.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.353 "strip_size_kb": 0, 00:11:35.353 "state": "configuring", 00:11:35.353 "raid_level": "raid1", 00:11:35.353 "superblock": false, 00:11:35.353 
"num_base_bdevs": 4, 00:11:35.353 "num_base_bdevs_discovered": 2, 00:11:35.353 "num_base_bdevs_operational": 4, 00:11:35.353 "base_bdevs_list": [ 00:11:35.353 { 00:11:35.353 "name": null, 00:11:35.353 "uuid": "ab64f637-4a5a-4336-b106-21aaa91bae28", 00:11:35.353 "is_configured": false, 00:11:35.353 "data_offset": 0, 00:11:35.353 "data_size": 65536 00:11:35.353 }, 00:11:35.353 { 00:11:35.353 "name": null, 00:11:35.353 "uuid": "153ea208-444e-4138-9fbc-6f89886dc0f9", 00:11:35.353 "is_configured": false, 00:11:35.353 "data_offset": 0, 00:11:35.353 "data_size": 65536 00:11:35.353 }, 00:11:35.353 { 00:11:35.353 "name": "BaseBdev3", 00:11:35.353 "uuid": "5885636c-b178-496f-bcb7-d3cc6cd93816", 00:11:35.353 "is_configured": true, 00:11:35.353 "data_offset": 0, 00:11:35.353 "data_size": 65536 00:11:35.353 }, 00:11:35.353 { 00:11:35.353 "name": "BaseBdev4", 00:11:35.353 "uuid": "bdcca8a1-2917-4b49-9a5d-cb9fae1f8ee6", 00:11:35.353 "is_configured": true, 00:11:35.353 "data_offset": 0, 00:11:35.353 "data_size": 65536 00:11:35.353 } 00:11:35.353 ] 00:11:35.353 }' 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.353 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.613 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.613 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:35.613 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.613 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:35.873 08:49:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.873 [2024-09-28 08:49:13.641439] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.873 08:49:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.873 "name": "Existed_Raid", 00:11:35.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.873 "strip_size_kb": 0, 00:11:35.873 "state": "configuring", 00:11:35.873 "raid_level": "raid1", 00:11:35.873 "superblock": false, 00:11:35.873 "num_base_bdevs": 4, 00:11:35.873 "num_base_bdevs_discovered": 3, 00:11:35.873 "num_base_bdevs_operational": 4, 00:11:35.873 "base_bdevs_list": [ 00:11:35.873 { 00:11:35.873 "name": null, 00:11:35.873 "uuid": "ab64f637-4a5a-4336-b106-21aaa91bae28", 00:11:35.873 "is_configured": false, 00:11:35.873 "data_offset": 0, 00:11:35.873 "data_size": 65536 00:11:35.873 }, 00:11:35.873 { 00:11:35.873 "name": "BaseBdev2", 00:11:35.873 "uuid": "153ea208-444e-4138-9fbc-6f89886dc0f9", 00:11:35.873 "is_configured": true, 00:11:35.873 "data_offset": 0, 00:11:35.873 "data_size": 65536 00:11:35.873 }, 00:11:35.873 { 00:11:35.873 "name": "BaseBdev3", 00:11:35.873 "uuid": "5885636c-b178-496f-bcb7-d3cc6cd93816", 00:11:35.873 "is_configured": true, 00:11:35.873 "data_offset": 0, 00:11:35.873 "data_size": 65536 00:11:35.873 }, 00:11:35.873 { 00:11:35.873 "name": "BaseBdev4", 00:11:35.873 "uuid": "bdcca8a1-2917-4b49-9a5d-cb9fae1f8ee6", 00:11:35.873 "is_configured": true, 00:11:35.873 "data_offset": 0, 00:11:35.873 "data_size": 65536 00:11:35.873 } 00:11:35.873 ] 00:11:35.873 }' 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.873 08:49:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.133 08:49:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:36.133 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.133 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.133 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.133 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ab64f637-4a5a-4336-b106-21aaa91bae28 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.393 [2024-09-28 08:49:14.222195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:36.393 [2024-09-28 08:49:14.222247] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:36.393 [2024-09-28 08:49:14.222256] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:36.393 
[2024-09-28 08:49:14.222540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:36.393 [2024-09-28 08:49:14.222744] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:36.393 [2024-09-28 08:49:14.222756] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:36.393 NewBaseBdev 00:11:36.393 [2024-09-28 08:49:14.223021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.393 [ 00:11:36.393 { 00:11:36.393 "name": "NewBaseBdev", 00:11:36.393 "aliases": [ 00:11:36.393 "ab64f637-4a5a-4336-b106-21aaa91bae28" 00:11:36.393 ], 00:11:36.393 "product_name": "Malloc disk", 00:11:36.393 "block_size": 512, 00:11:36.393 "num_blocks": 65536, 00:11:36.393 "uuid": "ab64f637-4a5a-4336-b106-21aaa91bae28", 00:11:36.393 "assigned_rate_limits": { 00:11:36.393 "rw_ios_per_sec": 0, 00:11:36.393 "rw_mbytes_per_sec": 0, 00:11:36.393 "r_mbytes_per_sec": 0, 00:11:36.393 "w_mbytes_per_sec": 0 00:11:36.393 }, 00:11:36.393 "claimed": true, 00:11:36.393 "claim_type": "exclusive_write", 00:11:36.393 "zoned": false, 00:11:36.393 "supported_io_types": { 00:11:36.393 "read": true, 00:11:36.393 "write": true, 00:11:36.393 "unmap": true, 00:11:36.393 "flush": true, 00:11:36.393 "reset": true, 00:11:36.393 "nvme_admin": false, 00:11:36.393 "nvme_io": false, 00:11:36.393 "nvme_io_md": false, 00:11:36.393 "write_zeroes": true, 00:11:36.393 "zcopy": true, 00:11:36.393 "get_zone_info": false, 00:11:36.393 "zone_management": false, 00:11:36.393 "zone_append": false, 00:11:36.393 "compare": false, 00:11:36.393 "compare_and_write": false, 00:11:36.393 "abort": true, 00:11:36.393 "seek_hole": false, 00:11:36.393 "seek_data": false, 00:11:36.393 "copy": true, 00:11:36.393 "nvme_iov_md": false 00:11:36.393 }, 00:11:36.393 "memory_domains": [ 00:11:36.393 { 00:11:36.393 "dma_device_id": "system", 00:11:36.393 "dma_device_type": 1 00:11:36.393 }, 00:11:36.393 { 00:11:36.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.393 "dma_device_type": 2 00:11:36.393 } 00:11:36.393 ], 00:11:36.393 "driver_specific": {} 00:11:36.393 } 00:11:36.393 ] 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.393 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.393 "name": "Existed_Raid", 00:11:36.393 "uuid": "288b6df5-0958-4e84-accd-c46656148651", 00:11:36.393 "strip_size_kb": 0, 00:11:36.393 "state": "online", 00:11:36.393 
"raid_level": "raid1", 00:11:36.393 "superblock": false, 00:11:36.393 "num_base_bdevs": 4, 00:11:36.393 "num_base_bdevs_discovered": 4, 00:11:36.393 "num_base_bdevs_operational": 4, 00:11:36.393 "base_bdevs_list": [ 00:11:36.394 { 00:11:36.394 "name": "NewBaseBdev", 00:11:36.394 "uuid": "ab64f637-4a5a-4336-b106-21aaa91bae28", 00:11:36.394 "is_configured": true, 00:11:36.394 "data_offset": 0, 00:11:36.394 "data_size": 65536 00:11:36.394 }, 00:11:36.394 { 00:11:36.394 "name": "BaseBdev2", 00:11:36.394 "uuid": "153ea208-444e-4138-9fbc-6f89886dc0f9", 00:11:36.394 "is_configured": true, 00:11:36.394 "data_offset": 0, 00:11:36.394 "data_size": 65536 00:11:36.394 }, 00:11:36.394 { 00:11:36.394 "name": "BaseBdev3", 00:11:36.394 "uuid": "5885636c-b178-496f-bcb7-d3cc6cd93816", 00:11:36.394 "is_configured": true, 00:11:36.394 "data_offset": 0, 00:11:36.394 "data_size": 65536 00:11:36.394 }, 00:11:36.394 { 00:11:36.394 "name": "BaseBdev4", 00:11:36.394 "uuid": "bdcca8a1-2917-4b49-9a5d-cb9fae1f8ee6", 00:11:36.394 "is_configured": true, 00:11:36.394 "data_offset": 0, 00:11:36.394 "data_size": 65536 00:11:36.394 } 00:11:36.394 ] 00:11:36.394 }' 00:11:36.394 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.394 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.964 [2024-09-28 08:49:14.717706] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.964 "name": "Existed_Raid", 00:11:36.964 "aliases": [ 00:11:36.964 "288b6df5-0958-4e84-accd-c46656148651" 00:11:36.964 ], 00:11:36.964 "product_name": "Raid Volume", 00:11:36.964 "block_size": 512, 00:11:36.964 "num_blocks": 65536, 00:11:36.964 "uuid": "288b6df5-0958-4e84-accd-c46656148651", 00:11:36.964 "assigned_rate_limits": { 00:11:36.964 "rw_ios_per_sec": 0, 00:11:36.964 "rw_mbytes_per_sec": 0, 00:11:36.964 "r_mbytes_per_sec": 0, 00:11:36.964 "w_mbytes_per_sec": 0 00:11:36.964 }, 00:11:36.964 "claimed": false, 00:11:36.964 "zoned": false, 00:11:36.964 "supported_io_types": { 00:11:36.964 "read": true, 00:11:36.964 "write": true, 00:11:36.964 "unmap": false, 00:11:36.964 "flush": false, 00:11:36.964 "reset": true, 00:11:36.964 "nvme_admin": false, 00:11:36.964 "nvme_io": false, 00:11:36.964 "nvme_io_md": false, 00:11:36.964 "write_zeroes": true, 00:11:36.964 "zcopy": false, 00:11:36.964 "get_zone_info": false, 00:11:36.964 "zone_management": false, 00:11:36.964 "zone_append": false, 00:11:36.964 "compare": false, 00:11:36.964 "compare_and_write": false, 00:11:36.964 "abort": false, 00:11:36.964 "seek_hole": false, 00:11:36.964 "seek_data": false, 00:11:36.964 
"copy": false, 00:11:36.964 "nvme_iov_md": false 00:11:36.964 }, 00:11:36.964 "memory_domains": [ 00:11:36.964 { 00:11:36.964 "dma_device_id": "system", 00:11:36.964 "dma_device_type": 1 00:11:36.964 }, 00:11:36.964 { 00:11:36.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.964 "dma_device_type": 2 00:11:36.964 }, 00:11:36.964 { 00:11:36.964 "dma_device_id": "system", 00:11:36.964 "dma_device_type": 1 00:11:36.964 }, 00:11:36.964 { 00:11:36.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.964 "dma_device_type": 2 00:11:36.964 }, 00:11:36.964 { 00:11:36.964 "dma_device_id": "system", 00:11:36.964 "dma_device_type": 1 00:11:36.964 }, 00:11:36.964 { 00:11:36.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.964 "dma_device_type": 2 00:11:36.964 }, 00:11:36.964 { 00:11:36.964 "dma_device_id": "system", 00:11:36.964 "dma_device_type": 1 00:11:36.964 }, 00:11:36.964 { 00:11:36.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.964 "dma_device_type": 2 00:11:36.964 } 00:11:36.964 ], 00:11:36.964 "driver_specific": { 00:11:36.964 "raid": { 00:11:36.964 "uuid": "288b6df5-0958-4e84-accd-c46656148651", 00:11:36.964 "strip_size_kb": 0, 00:11:36.964 "state": "online", 00:11:36.964 "raid_level": "raid1", 00:11:36.964 "superblock": false, 00:11:36.964 "num_base_bdevs": 4, 00:11:36.964 "num_base_bdevs_discovered": 4, 00:11:36.964 "num_base_bdevs_operational": 4, 00:11:36.964 "base_bdevs_list": [ 00:11:36.964 { 00:11:36.964 "name": "NewBaseBdev", 00:11:36.964 "uuid": "ab64f637-4a5a-4336-b106-21aaa91bae28", 00:11:36.964 "is_configured": true, 00:11:36.964 "data_offset": 0, 00:11:36.964 "data_size": 65536 00:11:36.964 }, 00:11:36.964 { 00:11:36.964 "name": "BaseBdev2", 00:11:36.964 "uuid": "153ea208-444e-4138-9fbc-6f89886dc0f9", 00:11:36.964 "is_configured": true, 00:11:36.964 "data_offset": 0, 00:11:36.964 "data_size": 65536 00:11:36.964 }, 00:11:36.964 { 00:11:36.964 "name": "BaseBdev3", 00:11:36.964 "uuid": "5885636c-b178-496f-bcb7-d3cc6cd93816", 00:11:36.964 
"is_configured": true, 00:11:36.964 "data_offset": 0, 00:11:36.964 "data_size": 65536 00:11:36.964 }, 00:11:36.964 { 00:11:36.964 "name": "BaseBdev4", 00:11:36.964 "uuid": "bdcca8a1-2917-4b49-9a5d-cb9fae1f8ee6", 00:11:36.964 "is_configured": true, 00:11:36.964 "data_offset": 0, 00:11:36.964 "data_size": 65536 00:11:36.964 } 00:11:36.964 ] 00:11:36.964 } 00:11:36.964 } 00:11:36.964 }' 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:36.964 BaseBdev2 00:11:36.964 BaseBdev3 00:11:36.964 BaseBdev4' 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.964 08:49:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.964 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.224 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.224 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.224 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.224 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.224 08:49:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.224 08:49:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:37.224 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.224 08:49:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.224 08:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.224 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.224 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.224 08:49:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:37.224 08:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.224 08:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.224 [2024-09-28 08:49:15.044785] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:37.224 [2024-09-28 08:49:15.044850] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.224 [2024-09-28 08:49:15.044970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.224 [2024-09-28 08:49:15.045303] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.224 [2024-09-28 08:49:15.045360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:37.224 08:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.224 08:49:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73193 00:11:37.224 08:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73193 ']' 00:11:37.224 08:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73193 00:11:37.224 08:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:37.224 08:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:37.225 08:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73193 00:11:37.225 killing process with pid 73193 00:11:37.225 08:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:37.225 08:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:37.225 08:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73193' 00:11:37.225 08:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73193 00:11:37.225 [2024-09-28 08:49:15.092603] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:37.225 08:49:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73193 00:11:37.795 [2024-09-28 08:49:15.505323] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:39.191 00:11:39.191 real 0m11.819s 00:11:39.191 user 0m18.473s 00:11:39.191 sys 0m2.171s 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:39.191 ************************************ 00:11:39.191 END TEST raid_state_function_test 00:11:39.191 ************************************ 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:39.191 08:49:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:39.191 08:49:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:39.191 08:49:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:39.191 08:49:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:39.191 ************************************ 00:11:39.191 START TEST raid_state_function_test_sb 00:11:39.191 ************************************ 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.191 
08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73865 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73865' 00:11:39.191 Process raid pid: 73865 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73865 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73865 ']' 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:39.191 08:49:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.191 [2024-09-28 08:49:17.008277] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:39.192 [2024-09-28 08:49:17.008488] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.192 [2024-09-28 08:49:17.177804] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.451 [2024-09-28 08:49:17.420278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.711 [2024-09-28 08:49:17.656524] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.711 [2024-09-28 08:49:17.656675] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.971 [2024-09-28 08:49:17.833703] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:39.971 [2024-09-28 08:49:17.833800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:39.971 [2024-09-28 08:49:17.833831] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:39.971 [2024-09-28 08:49:17.833854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:39.971 [2024-09-28 08:49:17.833871] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:39.971 [2024-09-28 08:49:17.833894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:39.971 [2024-09-28 08:49:17.833911] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:39.971 [2024-09-28 08:49:17.833948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.971 08:49:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.971 08:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.972 "name": "Existed_Raid", 00:11:39.972 "uuid": "c14ce945-5a2f-4258-b807-a0ce34e65bb2", 00:11:39.972 "strip_size_kb": 0, 00:11:39.972 "state": "configuring", 00:11:39.972 "raid_level": "raid1", 00:11:39.972 "superblock": true, 00:11:39.972 "num_base_bdevs": 4, 00:11:39.972 "num_base_bdevs_discovered": 0, 00:11:39.972 "num_base_bdevs_operational": 4, 00:11:39.972 "base_bdevs_list": [ 00:11:39.972 { 00:11:39.972 "name": "BaseBdev1", 00:11:39.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.972 "is_configured": false, 00:11:39.972 "data_offset": 0, 00:11:39.972 "data_size": 0 00:11:39.972 }, 00:11:39.972 { 00:11:39.972 "name": "BaseBdev2", 00:11:39.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.972 "is_configured": false, 00:11:39.972 "data_offset": 0, 00:11:39.972 "data_size": 0 00:11:39.972 }, 00:11:39.972 { 00:11:39.972 "name": "BaseBdev3", 00:11:39.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.972 "is_configured": false, 00:11:39.972 "data_offset": 0, 00:11:39.972 "data_size": 0 00:11:39.972 }, 00:11:39.972 { 00:11:39.972 "name": "BaseBdev4", 00:11:39.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.972 "is_configured": false, 00:11:39.972 "data_offset": 0, 00:11:39.972 "data_size": 0 00:11:39.972 } 00:11:39.972 ] 00:11:39.972 }' 00:11:39.972 08:49:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.972 08:49:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.541 08:49:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:40.541 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.541 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.541 [2024-09-28 08:49:18.308814] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:40.541 [2024-09-28 08:49:18.308900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:40.541 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.541 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:40.541 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.541 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.541 [2024-09-28 08:49:18.320816] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.541 [2024-09-28 08:49:18.320890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.541 [2024-09-28 08:49:18.320933] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:40.541 [2024-09-28 08:49:18.320956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:40.541 [2024-09-28 08:49:18.320975] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:40.541 [2024-09-28 08:49:18.320996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:40.541 [2024-09-28 08:49:18.321013] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:40.541 [2024-09-28 08:49:18.321044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:40.541 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.541 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:40.541 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.541 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.542 [2024-09-28 08:49:18.410678] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.542 BaseBdev1 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.542 [ 00:11:40.542 { 00:11:40.542 "name": "BaseBdev1", 00:11:40.542 "aliases": [ 00:11:40.542 "ca8db13a-4181-4b39-a8b4-5442c184013a" 00:11:40.542 ], 00:11:40.542 "product_name": "Malloc disk", 00:11:40.542 "block_size": 512, 00:11:40.542 "num_blocks": 65536, 00:11:40.542 "uuid": "ca8db13a-4181-4b39-a8b4-5442c184013a", 00:11:40.542 "assigned_rate_limits": { 00:11:40.542 "rw_ios_per_sec": 0, 00:11:40.542 "rw_mbytes_per_sec": 0, 00:11:40.542 "r_mbytes_per_sec": 0, 00:11:40.542 "w_mbytes_per_sec": 0 00:11:40.542 }, 00:11:40.542 "claimed": true, 00:11:40.542 "claim_type": "exclusive_write", 00:11:40.542 "zoned": false, 00:11:40.542 "supported_io_types": { 00:11:40.542 "read": true, 00:11:40.542 "write": true, 00:11:40.542 "unmap": true, 00:11:40.542 "flush": true, 00:11:40.542 "reset": true, 00:11:40.542 "nvme_admin": false, 00:11:40.542 "nvme_io": false, 00:11:40.542 "nvme_io_md": false, 00:11:40.542 "write_zeroes": true, 00:11:40.542 "zcopy": true, 00:11:40.542 "get_zone_info": false, 00:11:40.542 "zone_management": false, 00:11:40.542 "zone_append": false, 00:11:40.542 "compare": false, 00:11:40.542 "compare_and_write": false, 00:11:40.542 "abort": true, 00:11:40.542 "seek_hole": false, 00:11:40.542 "seek_data": false, 00:11:40.542 "copy": true, 00:11:40.542 "nvme_iov_md": false 00:11:40.542 }, 00:11:40.542 "memory_domains": [ 00:11:40.542 { 00:11:40.542 "dma_device_id": "system", 00:11:40.542 "dma_device_type": 1 00:11:40.542 }, 00:11:40.542 { 00:11:40.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.542 "dma_device_type": 2 00:11:40.542 } 00:11:40.542 ], 00:11:40.542 "driver_specific": {} 
00:11:40.542 } 00:11:40.542 ] 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.542 "name": "Existed_Raid", 00:11:40.542 "uuid": "dcfbc8d3-8fd6-470a-857f-20029788d648", 00:11:40.542 "strip_size_kb": 0, 00:11:40.542 "state": "configuring", 00:11:40.542 "raid_level": "raid1", 00:11:40.542 "superblock": true, 00:11:40.542 "num_base_bdevs": 4, 00:11:40.542 "num_base_bdevs_discovered": 1, 00:11:40.542 "num_base_bdevs_operational": 4, 00:11:40.542 "base_bdevs_list": [ 00:11:40.542 { 00:11:40.542 "name": "BaseBdev1", 00:11:40.542 "uuid": "ca8db13a-4181-4b39-a8b4-5442c184013a", 00:11:40.542 "is_configured": true, 00:11:40.542 "data_offset": 2048, 00:11:40.542 "data_size": 63488 00:11:40.542 }, 00:11:40.542 { 00:11:40.542 "name": "BaseBdev2", 00:11:40.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.542 "is_configured": false, 00:11:40.542 "data_offset": 0, 00:11:40.542 "data_size": 0 00:11:40.542 }, 00:11:40.542 { 00:11:40.542 "name": "BaseBdev3", 00:11:40.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.542 "is_configured": false, 00:11:40.542 "data_offset": 0, 00:11:40.542 "data_size": 0 00:11:40.542 }, 00:11:40.542 { 00:11:40.542 "name": "BaseBdev4", 00:11:40.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.542 "is_configured": false, 00:11:40.542 "data_offset": 0, 00:11:40.542 "data_size": 0 00:11:40.542 } 00:11:40.542 ] 00:11:40.542 }' 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.542 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:41.111 [2024-09-28 08:49:18.841911] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.111 [2024-09-28 08:49:18.841997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.111 [2024-09-28 08:49:18.853949] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.111 [2024-09-28 08:49:18.856062] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.111 [2024-09-28 08:49:18.856139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.111 [2024-09-28 08:49:18.856168] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:41.111 [2024-09-28 08:49:18.856191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:41.111 [2024-09-28 08:49:18.856210] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:41.111 [2024-09-28 08:49:18.856230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:41.111 08:49:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.111 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.112 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.112 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.112 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.112 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.112 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.112 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.112 "name": 
"Existed_Raid", 00:11:41.112 "uuid": "20a76fb8-65e9-4323-9db0-9f01cb912d76", 00:11:41.112 "strip_size_kb": 0, 00:11:41.112 "state": "configuring", 00:11:41.112 "raid_level": "raid1", 00:11:41.112 "superblock": true, 00:11:41.112 "num_base_bdevs": 4, 00:11:41.112 "num_base_bdevs_discovered": 1, 00:11:41.112 "num_base_bdevs_operational": 4, 00:11:41.112 "base_bdevs_list": [ 00:11:41.112 { 00:11:41.112 "name": "BaseBdev1", 00:11:41.112 "uuid": "ca8db13a-4181-4b39-a8b4-5442c184013a", 00:11:41.112 "is_configured": true, 00:11:41.112 "data_offset": 2048, 00:11:41.112 "data_size": 63488 00:11:41.112 }, 00:11:41.112 { 00:11:41.112 "name": "BaseBdev2", 00:11:41.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.112 "is_configured": false, 00:11:41.112 "data_offset": 0, 00:11:41.112 "data_size": 0 00:11:41.112 }, 00:11:41.112 { 00:11:41.112 "name": "BaseBdev3", 00:11:41.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.112 "is_configured": false, 00:11:41.112 "data_offset": 0, 00:11:41.112 "data_size": 0 00:11:41.112 }, 00:11:41.112 { 00:11:41.112 "name": "BaseBdev4", 00:11:41.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.112 "is_configured": false, 00:11:41.112 "data_offset": 0, 00:11:41.112 "data_size": 0 00:11:41.112 } 00:11:41.112 ] 00:11:41.112 }' 00:11:41.112 08:49:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.112 08:49:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.372 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:41.372 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.372 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.372 [2024-09-28 08:49:19.360361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.372 
BaseBdev2 00:11:41.372 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.372 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:41.372 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:41.372 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:41.372 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:41.372 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:41.372 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:41.372 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:41.372 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.372 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.632 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.632 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:41.632 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.632 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.632 [ 00:11:41.632 { 00:11:41.632 "name": "BaseBdev2", 00:11:41.632 "aliases": [ 00:11:41.632 "d520cde8-6fe3-438d-905d-34590f3af8ea" 00:11:41.632 ], 00:11:41.632 "product_name": "Malloc disk", 00:11:41.632 "block_size": 512, 00:11:41.632 "num_blocks": 65536, 00:11:41.632 "uuid": "d520cde8-6fe3-438d-905d-34590f3af8ea", 00:11:41.632 "assigned_rate_limits": { 
00:11:41.632 "rw_ios_per_sec": 0, 00:11:41.632 "rw_mbytes_per_sec": 0, 00:11:41.632 "r_mbytes_per_sec": 0, 00:11:41.632 "w_mbytes_per_sec": 0 00:11:41.632 }, 00:11:41.632 "claimed": true, 00:11:41.632 "claim_type": "exclusive_write", 00:11:41.632 "zoned": false, 00:11:41.632 "supported_io_types": { 00:11:41.632 "read": true, 00:11:41.632 "write": true, 00:11:41.632 "unmap": true, 00:11:41.632 "flush": true, 00:11:41.632 "reset": true, 00:11:41.632 "nvme_admin": false, 00:11:41.632 "nvme_io": false, 00:11:41.632 "nvme_io_md": false, 00:11:41.632 "write_zeroes": true, 00:11:41.632 "zcopy": true, 00:11:41.632 "get_zone_info": false, 00:11:41.632 "zone_management": false, 00:11:41.632 "zone_append": false, 00:11:41.632 "compare": false, 00:11:41.632 "compare_and_write": false, 00:11:41.632 "abort": true, 00:11:41.632 "seek_hole": false, 00:11:41.632 "seek_data": false, 00:11:41.632 "copy": true, 00:11:41.632 "nvme_iov_md": false 00:11:41.632 }, 00:11:41.632 "memory_domains": [ 00:11:41.632 { 00:11:41.632 "dma_device_id": "system", 00:11:41.632 "dma_device_type": 1 00:11:41.632 }, 00:11:41.632 { 00:11:41.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.632 "dma_device_type": 2 00:11:41.632 } 00:11:41.632 ], 00:11:41.632 "driver_specific": {} 00:11:41.632 } 00:11:41.632 ] 00:11:41.632 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.632 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:41.632 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:41.632 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:41.632 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.632 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:41.632 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.632 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.632 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.632 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.632 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.632 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.633 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.633 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.633 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.633 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.633 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.633 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.633 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.633 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.633 "name": "Existed_Raid", 00:11:41.633 "uuid": "20a76fb8-65e9-4323-9db0-9f01cb912d76", 00:11:41.633 "strip_size_kb": 0, 00:11:41.633 "state": "configuring", 00:11:41.633 "raid_level": "raid1", 00:11:41.633 "superblock": true, 00:11:41.633 "num_base_bdevs": 4, 00:11:41.633 "num_base_bdevs_discovered": 2, 00:11:41.633 "num_base_bdevs_operational": 4, 00:11:41.633 
"base_bdevs_list": [ 00:11:41.633 { 00:11:41.633 "name": "BaseBdev1", 00:11:41.633 "uuid": "ca8db13a-4181-4b39-a8b4-5442c184013a", 00:11:41.633 "is_configured": true, 00:11:41.633 "data_offset": 2048, 00:11:41.633 "data_size": 63488 00:11:41.633 }, 00:11:41.633 { 00:11:41.633 "name": "BaseBdev2", 00:11:41.633 "uuid": "d520cde8-6fe3-438d-905d-34590f3af8ea", 00:11:41.633 "is_configured": true, 00:11:41.633 "data_offset": 2048, 00:11:41.633 "data_size": 63488 00:11:41.633 }, 00:11:41.633 { 00:11:41.633 "name": "BaseBdev3", 00:11:41.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.633 "is_configured": false, 00:11:41.633 "data_offset": 0, 00:11:41.633 "data_size": 0 00:11:41.633 }, 00:11:41.633 { 00:11:41.633 "name": "BaseBdev4", 00:11:41.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.633 "is_configured": false, 00:11:41.633 "data_offset": 0, 00:11:41.633 "data_size": 0 00:11:41.633 } 00:11:41.633 ] 00:11:41.633 }' 00:11:41.633 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.633 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.892 [2024-09-28 08:49:19.865823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:41.892 BaseBdev3 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.892 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.152 [ 00:11:42.152 { 00:11:42.152 "name": "BaseBdev3", 00:11:42.152 "aliases": [ 00:11:42.152 "471cc269-cd2f-4689-9881-6d66a1f00d08" 00:11:42.152 ], 00:11:42.152 "product_name": "Malloc disk", 00:11:42.152 "block_size": 512, 00:11:42.152 "num_blocks": 65536, 00:11:42.152 "uuid": "471cc269-cd2f-4689-9881-6d66a1f00d08", 00:11:42.152 "assigned_rate_limits": { 00:11:42.152 "rw_ios_per_sec": 0, 00:11:42.152 "rw_mbytes_per_sec": 0, 00:11:42.152 "r_mbytes_per_sec": 0, 00:11:42.152 "w_mbytes_per_sec": 0 00:11:42.152 }, 00:11:42.152 "claimed": true, 00:11:42.152 "claim_type": "exclusive_write", 00:11:42.152 "zoned": false, 00:11:42.152 "supported_io_types": { 00:11:42.152 "read": true, 00:11:42.152 
"write": true, 00:11:42.152 "unmap": true, 00:11:42.152 "flush": true, 00:11:42.152 "reset": true, 00:11:42.152 "nvme_admin": false, 00:11:42.152 "nvme_io": false, 00:11:42.152 "nvme_io_md": false, 00:11:42.152 "write_zeroes": true, 00:11:42.152 "zcopy": true, 00:11:42.152 "get_zone_info": false, 00:11:42.152 "zone_management": false, 00:11:42.152 "zone_append": false, 00:11:42.152 "compare": false, 00:11:42.152 "compare_and_write": false, 00:11:42.152 "abort": true, 00:11:42.152 "seek_hole": false, 00:11:42.152 "seek_data": false, 00:11:42.152 "copy": true, 00:11:42.152 "nvme_iov_md": false 00:11:42.152 }, 00:11:42.152 "memory_domains": [ 00:11:42.152 { 00:11:42.152 "dma_device_id": "system", 00:11:42.152 "dma_device_type": 1 00:11:42.152 }, 00:11:42.152 { 00:11:42.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.152 "dma_device_type": 2 00:11:42.152 } 00:11:42.152 ], 00:11:42.152 "driver_specific": {} 00:11:42.152 } 00:11:42.152 ] 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.152 "name": "Existed_Raid", 00:11:42.152 "uuid": "20a76fb8-65e9-4323-9db0-9f01cb912d76", 00:11:42.152 "strip_size_kb": 0, 00:11:42.152 "state": "configuring", 00:11:42.152 "raid_level": "raid1", 00:11:42.152 "superblock": true, 00:11:42.152 "num_base_bdevs": 4, 00:11:42.152 "num_base_bdevs_discovered": 3, 00:11:42.152 "num_base_bdevs_operational": 4, 00:11:42.152 "base_bdevs_list": [ 00:11:42.152 { 00:11:42.152 "name": "BaseBdev1", 00:11:42.152 "uuid": "ca8db13a-4181-4b39-a8b4-5442c184013a", 00:11:42.152 "is_configured": true, 00:11:42.152 "data_offset": 2048, 00:11:42.152 "data_size": 63488 00:11:42.152 }, 00:11:42.152 { 00:11:42.152 "name": "BaseBdev2", 00:11:42.152 "uuid": 
"d520cde8-6fe3-438d-905d-34590f3af8ea", 00:11:42.152 "is_configured": true, 00:11:42.152 "data_offset": 2048, 00:11:42.152 "data_size": 63488 00:11:42.152 }, 00:11:42.152 { 00:11:42.152 "name": "BaseBdev3", 00:11:42.152 "uuid": "471cc269-cd2f-4689-9881-6d66a1f00d08", 00:11:42.152 "is_configured": true, 00:11:42.152 "data_offset": 2048, 00:11:42.152 "data_size": 63488 00:11:42.152 }, 00:11:42.152 { 00:11:42.152 "name": "BaseBdev4", 00:11:42.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.152 "is_configured": false, 00:11:42.152 "data_offset": 0, 00:11:42.152 "data_size": 0 00:11:42.152 } 00:11:42.152 ] 00:11:42.152 }' 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.152 08:49:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.412 [2024-09-28 08:49:20.376802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:42.412 [2024-09-28 08:49:20.377180] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:42.412 BaseBdev4 00:11:42.412 [2024-09-28 08:49:20.377238] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:42.412 [2024-09-28 08:49:20.377553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:42.412 [2024-09-28 08:49:20.377740] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:42.412 [2024-09-28 08:49:20.377758] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:42.412 [2024-09-28 08:49:20.377913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.412 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.412 [ 00:11:42.412 { 00:11:42.412 "name": "BaseBdev4", 00:11:42.412 "aliases": [ 00:11:42.412 "bdad0fc2-9906-4830-a24a-375d7430e63d" 00:11:42.412 ], 00:11:42.412 "product_name": "Malloc disk", 00:11:42.412 "block_size": 512, 00:11:42.412 
"num_blocks": 65536, 00:11:42.412 "uuid": "bdad0fc2-9906-4830-a24a-375d7430e63d", 00:11:42.412 "assigned_rate_limits": { 00:11:42.412 "rw_ios_per_sec": 0, 00:11:42.412 "rw_mbytes_per_sec": 0, 00:11:42.412 "r_mbytes_per_sec": 0, 00:11:42.412 "w_mbytes_per_sec": 0 00:11:42.412 }, 00:11:42.412 "claimed": true, 00:11:42.412 "claim_type": "exclusive_write", 00:11:42.670 "zoned": false, 00:11:42.670 "supported_io_types": { 00:11:42.670 "read": true, 00:11:42.670 "write": true, 00:11:42.670 "unmap": true, 00:11:42.670 "flush": true, 00:11:42.670 "reset": true, 00:11:42.670 "nvme_admin": false, 00:11:42.670 "nvme_io": false, 00:11:42.670 "nvme_io_md": false, 00:11:42.670 "write_zeroes": true, 00:11:42.670 "zcopy": true, 00:11:42.670 "get_zone_info": false, 00:11:42.670 "zone_management": false, 00:11:42.670 "zone_append": false, 00:11:42.670 "compare": false, 00:11:42.670 "compare_and_write": false, 00:11:42.670 "abort": true, 00:11:42.670 "seek_hole": false, 00:11:42.670 "seek_data": false, 00:11:42.670 "copy": true, 00:11:42.671 "nvme_iov_md": false 00:11:42.671 }, 00:11:42.671 "memory_domains": [ 00:11:42.671 { 00:11:42.671 "dma_device_id": "system", 00:11:42.671 "dma_device_type": 1 00:11:42.671 }, 00:11:42.671 { 00:11:42.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.671 "dma_device_type": 2 00:11:42.671 } 00:11:42.671 ], 00:11:42.671 "driver_specific": {} 00:11:42.671 } 00:11:42.671 ] 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.671 "name": "Existed_Raid", 00:11:42.671 "uuid": "20a76fb8-65e9-4323-9db0-9f01cb912d76", 00:11:42.671 "strip_size_kb": 0, 00:11:42.671 "state": "online", 00:11:42.671 "raid_level": "raid1", 00:11:42.671 "superblock": true, 00:11:42.671 "num_base_bdevs": 4, 
00:11:42.671 "num_base_bdevs_discovered": 4, 00:11:42.671 "num_base_bdevs_operational": 4, 00:11:42.671 "base_bdevs_list": [ 00:11:42.671 { 00:11:42.671 "name": "BaseBdev1", 00:11:42.671 "uuid": "ca8db13a-4181-4b39-a8b4-5442c184013a", 00:11:42.671 "is_configured": true, 00:11:42.671 "data_offset": 2048, 00:11:42.671 "data_size": 63488 00:11:42.671 }, 00:11:42.671 { 00:11:42.671 "name": "BaseBdev2", 00:11:42.671 "uuid": "d520cde8-6fe3-438d-905d-34590f3af8ea", 00:11:42.671 "is_configured": true, 00:11:42.671 "data_offset": 2048, 00:11:42.671 "data_size": 63488 00:11:42.671 }, 00:11:42.671 { 00:11:42.671 "name": "BaseBdev3", 00:11:42.671 "uuid": "471cc269-cd2f-4689-9881-6d66a1f00d08", 00:11:42.671 "is_configured": true, 00:11:42.671 "data_offset": 2048, 00:11:42.671 "data_size": 63488 00:11:42.671 }, 00:11:42.671 { 00:11:42.671 "name": "BaseBdev4", 00:11:42.671 "uuid": "bdad0fc2-9906-4830-a24a-375d7430e63d", 00:11:42.671 "is_configured": true, 00:11:42.671 "data_offset": 2048, 00:11:42.671 "data_size": 63488 00:11:42.671 } 00:11:42.671 ] 00:11:42.671 }' 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.671 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.930 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:42.930 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:42.930 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:42.930 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:42.930 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:42.930 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:42.930 
08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:42.930 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:42.930 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.930 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.930 [2024-09-28 08:49:20.860282] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.930 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.930 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:42.930 "name": "Existed_Raid", 00:11:42.930 "aliases": [ 00:11:42.930 "20a76fb8-65e9-4323-9db0-9f01cb912d76" 00:11:42.930 ], 00:11:42.930 "product_name": "Raid Volume", 00:11:42.930 "block_size": 512, 00:11:42.930 "num_blocks": 63488, 00:11:42.930 "uuid": "20a76fb8-65e9-4323-9db0-9f01cb912d76", 00:11:42.930 "assigned_rate_limits": { 00:11:42.930 "rw_ios_per_sec": 0, 00:11:42.930 "rw_mbytes_per_sec": 0, 00:11:42.930 "r_mbytes_per_sec": 0, 00:11:42.930 "w_mbytes_per_sec": 0 00:11:42.930 }, 00:11:42.930 "claimed": false, 00:11:42.930 "zoned": false, 00:11:42.930 "supported_io_types": { 00:11:42.930 "read": true, 00:11:42.930 "write": true, 00:11:42.930 "unmap": false, 00:11:42.930 "flush": false, 00:11:42.930 "reset": true, 00:11:42.930 "nvme_admin": false, 00:11:42.930 "nvme_io": false, 00:11:42.930 "nvme_io_md": false, 00:11:42.930 "write_zeroes": true, 00:11:42.930 "zcopy": false, 00:11:42.930 "get_zone_info": false, 00:11:42.930 "zone_management": false, 00:11:42.930 "zone_append": false, 00:11:42.930 "compare": false, 00:11:42.930 "compare_and_write": false, 00:11:42.930 "abort": false, 00:11:42.930 "seek_hole": false, 00:11:42.930 "seek_data": false, 00:11:42.930 "copy": false, 00:11:42.930 
"nvme_iov_md": false 00:11:42.930 }, 00:11:42.930 "memory_domains": [ 00:11:42.930 { 00:11:42.931 "dma_device_id": "system", 00:11:42.931 "dma_device_type": 1 00:11:42.931 }, 00:11:42.931 { 00:11:42.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.931 "dma_device_type": 2 00:11:42.931 }, 00:11:42.931 { 00:11:42.931 "dma_device_id": "system", 00:11:42.931 "dma_device_type": 1 00:11:42.931 }, 00:11:42.931 { 00:11:42.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.931 "dma_device_type": 2 00:11:42.931 }, 00:11:42.931 { 00:11:42.931 "dma_device_id": "system", 00:11:42.931 "dma_device_type": 1 00:11:42.931 }, 00:11:42.931 { 00:11:42.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.931 "dma_device_type": 2 00:11:42.931 }, 00:11:42.931 { 00:11:42.931 "dma_device_id": "system", 00:11:42.931 "dma_device_type": 1 00:11:42.931 }, 00:11:42.931 { 00:11:42.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.931 "dma_device_type": 2 00:11:42.931 } 00:11:42.931 ], 00:11:42.931 "driver_specific": { 00:11:42.931 "raid": { 00:11:42.931 "uuid": "20a76fb8-65e9-4323-9db0-9f01cb912d76", 00:11:42.931 "strip_size_kb": 0, 00:11:42.931 "state": "online", 00:11:42.931 "raid_level": "raid1", 00:11:42.931 "superblock": true, 00:11:42.931 "num_base_bdevs": 4, 00:11:42.931 "num_base_bdevs_discovered": 4, 00:11:42.931 "num_base_bdevs_operational": 4, 00:11:42.931 "base_bdevs_list": [ 00:11:42.931 { 00:11:42.931 "name": "BaseBdev1", 00:11:42.931 "uuid": "ca8db13a-4181-4b39-a8b4-5442c184013a", 00:11:42.931 "is_configured": true, 00:11:42.931 "data_offset": 2048, 00:11:42.931 "data_size": 63488 00:11:42.931 }, 00:11:42.931 { 00:11:42.931 "name": "BaseBdev2", 00:11:42.931 "uuid": "d520cde8-6fe3-438d-905d-34590f3af8ea", 00:11:42.931 "is_configured": true, 00:11:42.931 "data_offset": 2048, 00:11:42.931 "data_size": 63488 00:11:42.931 }, 00:11:42.931 { 00:11:42.931 "name": "BaseBdev3", 00:11:42.931 "uuid": "471cc269-cd2f-4689-9881-6d66a1f00d08", 00:11:42.931 "is_configured": true, 
00:11:42.931 "data_offset": 2048, 00:11:42.931 "data_size": 63488 00:11:42.931 }, 00:11:42.931 { 00:11:42.931 "name": "BaseBdev4", 00:11:42.931 "uuid": "bdad0fc2-9906-4830-a24a-375d7430e63d", 00:11:42.931 "is_configured": true, 00:11:42.931 "data_offset": 2048, 00:11:42.931 "data_size": 63488 00:11:42.931 } 00:11:42.931 ] 00:11:42.931 } 00:11:42.931 } 00:11:42.931 }' 00:11:42.931 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:43.189 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:43.189 BaseBdev2 00:11:43.189 BaseBdev3 00:11:43.189 BaseBdev4' 00:11:43.189 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.189 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:43.189 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.189 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:43.189 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.189 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.189 08:49:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.189 08:49:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.189 08:49:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.189 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.190 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:43.190 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:43.190 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.190 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.190 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.190 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.190 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.190 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.190 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:43.190 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.190 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.190 [2024-09-28 08:49:21.183501] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:43.449 08:49:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.449 "name": "Existed_Raid", 00:11:43.449 "uuid": "20a76fb8-65e9-4323-9db0-9f01cb912d76", 00:11:43.449 "strip_size_kb": 0, 00:11:43.449 
"state": "online", 00:11:43.449 "raid_level": "raid1", 00:11:43.449 "superblock": true, 00:11:43.449 "num_base_bdevs": 4, 00:11:43.449 "num_base_bdevs_discovered": 3, 00:11:43.449 "num_base_bdevs_operational": 3, 00:11:43.449 "base_bdevs_list": [ 00:11:43.449 { 00:11:43.449 "name": null, 00:11:43.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.449 "is_configured": false, 00:11:43.449 "data_offset": 0, 00:11:43.449 "data_size": 63488 00:11:43.449 }, 00:11:43.449 { 00:11:43.449 "name": "BaseBdev2", 00:11:43.449 "uuid": "d520cde8-6fe3-438d-905d-34590f3af8ea", 00:11:43.449 "is_configured": true, 00:11:43.449 "data_offset": 2048, 00:11:43.449 "data_size": 63488 00:11:43.449 }, 00:11:43.449 { 00:11:43.449 "name": "BaseBdev3", 00:11:43.449 "uuid": "471cc269-cd2f-4689-9881-6d66a1f00d08", 00:11:43.449 "is_configured": true, 00:11:43.449 "data_offset": 2048, 00:11:43.449 "data_size": 63488 00:11:43.449 }, 00:11:43.449 { 00:11:43.449 "name": "BaseBdev4", 00:11:43.449 "uuid": "bdad0fc2-9906-4830-a24a-375d7430e63d", 00:11:43.449 "is_configured": true, 00:11:43.449 "data_offset": 2048, 00:11:43.449 "data_size": 63488 00:11:43.449 } 00:11:43.449 ] 00:11:43.449 }' 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.449 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.018 08:49:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.018 [2024-09-28 08:49:21.805771] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.018 08:49:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.018 [2024-09-28 08:49:21.963121] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.276 [2024-09-28 08:49:22.126107] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:11:44.276 [2024-09-28 08:49:22.126223] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:44.276 [2024-09-28 08:49:22.225428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:44.276 [2024-09-28 08:49:22.225499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:44.276 [2024-09-28 08:49:22.225512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.276 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.536 BaseBdev2
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.536 [
00:11:44.536 {
00:11:44.536 "name": "BaseBdev2",
00:11:44.536 "aliases": [
00:11:44.536 "1827d834-bddd-4729-a110-7d1a71985129"
00:11:44.536 ],
00:11:44.536 "product_name": "Malloc disk",
00:11:44.536 "block_size": 512,
00:11:44.536 "num_blocks": 65536,
00:11:44.536 "uuid": "1827d834-bddd-4729-a110-7d1a71985129",
00:11:44.536 "assigned_rate_limits": {
00:11:44.536 "rw_ios_per_sec": 0,
00:11:44.536 "rw_mbytes_per_sec": 0,
00:11:44.536 "r_mbytes_per_sec": 0,
00:11:44.536 "w_mbytes_per_sec": 0
00:11:44.536 },
00:11:44.536 "claimed": false,
00:11:44.536 "zoned": false,
00:11:44.536 "supported_io_types": {
00:11:44.536 "read": true,
00:11:44.536 "write": true,
00:11:44.536 "unmap": true,
00:11:44.536 "flush": true,
00:11:44.536 "reset": true,
00:11:44.536 "nvme_admin": false,
00:11:44.536 "nvme_io": false,
00:11:44.536 "nvme_io_md": false,
00:11:44.536 "write_zeroes": true,
00:11:44.536 "zcopy": true,
00:11:44.536 "get_zone_info": false,
00:11:44.536 "zone_management": false,
00:11:44.536 "zone_append": false,
00:11:44.536 "compare": false,
00:11:44.536 "compare_and_write": false,
00:11:44.536 "abort": true,
00:11:44.536 "seek_hole": false,
00:11:44.536 "seek_data": false,
00:11:44.536 "copy": true,
00:11:44.536 "nvme_iov_md": false
00:11:44.536 },
00:11:44.536 "memory_domains": [
00:11:44.536 {
00:11:44.536 "dma_device_id": "system",
00:11:44.536 "dma_device_type": 1
00:11:44.536 },
00:11:44.536 {
00:11:44.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:44.536 "dma_device_type": 2
00:11:44.536 }
00:11:44.536 ],
00:11:44.536 "driver_specific": {}
00:11:44.536 }
00:11:44.536 ]
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.536 BaseBdev3
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.536 [
00:11:44.536 {
00:11:44.536 "name": "BaseBdev3",
00:11:44.536 "aliases": [
00:11:44.536 "97b94e4d-b53c-42bb-b7db-1ef2245d5d2d"
00:11:44.536 ],
00:11:44.536 "product_name": "Malloc disk",
00:11:44.536 "block_size": 512,
00:11:44.536 "num_blocks": 65536,
00:11:44.536 "uuid": "97b94e4d-b53c-42bb-b7db-1ef2245d5d2d",
00:11:44.536 "assigned_rate_limits": {
00:11:44.536 "rw_ios_per_sec": 0,
00:11:44.536 "rw_mbytes_per_sec": 0,
00:11:44.536 "r_mbytes_per_sec": 0,
00:11:44.536 "w_mbytes_per_sec": 0
00:11:44.536 },
00:11:44.536 "claimed": false,
00:11:44.536 "zoned": false,
00:11:44.536 "supported_io_types": {
00:11:44.536 "read": true,
00:11:44.536 "write": true,
00:11:44.536 "unmap": true,
00:11:44.536 "flush": true,
00:11:44.536 "reset": true,
00:11:44.536 "nvme_admin": false,
00:11:44.536 "nvme_io": false,
00:11:44.536 "nvme_io_md": false,
00:11:44.536 "write_zeroes": true,
00:11:44.536 "zcopy": true,
00:11:44.536 "get_zone_info": false,
00:11:44.536 "zone_management": false,
00:11:44.536 "zone_append": false,
00:11:44.536 "compare": false,
00:11:44.536 "compare_and_write": false,
00:11:44.536 "abort": true,
00:11:44.536 "seek_hole": false,
00:11:44.536 "seek_data": false,
00:11:44.536 "copy": true,
00:11:44.536 "nvme_iov_md": false
00:11:44.536 },
00:11:44.536 "memory_domains": [
00:11:44.536 {
00:11:44.536 "dma_device_id": "system",
00:11:44.536 "dma_device_type": 1
00:11:44.536 },
00:11:44.536 {
00:11:44.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:44.536 "dma_device_type": 2
00:11:44.536 }
00:11:44.536 ],
00:11:44.536 "driver_specific": {}
00:11:44.536 }
00:11:44.536 ]
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.536 BaseBdev4
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.536 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.537 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.537 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:11:44.537 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.537 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.537 [
00:11:44.537 {
00:11:44.537 "name": "BaseBdev4",
00:11:44.537 "aliases": [
00:11:44.537 "1802e954-ed36-4bb3-a8f6-3325d9a4c3ca"
00:11:44.537 ],
00:11:44.537 "product_name": "Malloc disk",
00:11:44.537 "block_size": 512,
00:11:44.537 "num_blocks": 65536,
00:11:44.537 "uuid": "1802e954-ed36-4bb3-a8f6-3325d9a4c3ca",
00:11:44.537 "assigned_rate_limits": {
00:11:44.537 "rw_ios_per_sec": 0,
00:11:44.537 "rw_mbytes_per_sec": 0,
00:11:44.537 "r_mbytes_per_sec": 0,
00:11:44.537 "w_mbytes_per_sec": 0
00:11:44.537 },
00:11:44.537 "claimed": false,
00:11:44.537 "zoned": false,
00:11:44.537 "supported_io_types": {
00:11:44.537 "read": true,
00:11:44.537 "write": true,
00:11:44.537 "unmap": true,
00:11:44.537 "flush": true,
00:11:44.537 "reset": true,
00:11:44.537 "nvme_admin": false,
00:11:44.537 "nvme_io": false,
00:11:44.796 "nvme_io_md": false,
00:11:44.796 "write_zeroes": true,
00:11:44.796 "zcopy": true,
00:11:44.796 "get_zone_info": false,
00:11:44.796 "zone_management": false,
00:11:44.796 "zone_append": false,
00:11:44.796 "compare": false,
00:11:44.796 "compare_and_write": false,
00:11:44.796 "abort": true,
00:11:44.796 "seek_hole": false,
00:11:44.796 "seek_data": false,
00:11:44.796 "copy": true,
00:11:44.796 "nvme_iov_md": false
00:11:44.796 },
00:11:44.796 "memory_domains": [
00:11:44.796 {
00:11:44.796 "dma_device_id": "system",
00:11:44.796 "dma_device_type": 1
00:11:44.796 },
00:11:44.796 {
00:11:44.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:44.796 "dma_device_type": 2
00:11:44.796 }
00:11:44.796 ],
00:11:44.796 "driver_specific": {}
00:11:44.796 }
00:11:44.796 ]
00:11:44.796 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.796 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:44.796 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:44.796 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:44.796 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:44.796 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.796 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.796 [2024-09-28 08:49:22.540819] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:44.796 [2024-09-28 08:49:22.540923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:44.796 [2024-09-28 08:49:22.540992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:44.796 [2024-09-28 08:49:22.543031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:44.796 [2024-09-28 08:49:22.543139] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:44.796 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.796 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:44.796 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:44.796 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:44.796 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:44.796 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:44.796 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:44.797 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:44.797 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:44.797 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:44.797 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:44.797 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:44.797 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.797 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:44.797 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.797 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.797 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:44.797 "name": "Existed_Raid",
00:11:44.797 "uuid": "26b206c2-094c-4d95-9400-399609bd5002",
00:11:44.797 "strip_size_kb": 0,
00:11:44.797 "state": "configuring",
00:11:44.797 "raid_level": "raid1",
00:11:44.797 "superblock": true,
00:11:44.797 "num_base_bdevs": 4,
00:11:44.797 "num_base_bdevs_discovered": 3,
00:11:44.797 "num_base_bdevs_operational": 4,
00:11:44.797 "base_bdevs_list": [
00:11:44.797 {
00:11:44.797 "name": "BaseBdev1",
00:11:44.797 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:44.797 "is_configured": false,
00:11:44.797 "data_offset": 0,
00:11:44.797 "data_size": 0
00:11:44.797 },
00:11:44.797 {
00:11:44.797 "name": "BaseBdev2",
00:11:44.797 "uuid": "1827d834-bddd-4729-a110-7d1a71985129",
00:11:44.797 "is_configured": true,
00:11:44.797 "data_offset": 2048,
00:11:44.797 "data_size": 63488
00:11:44.797 },
00:11:44.797 {
00:11:44.797 "name": "BaseBdev3",
00:11:44.797 "uuid": "97b94e4d-b53c-42bb-b7db-1ef2245d5d2d",
00:11:44.797 "is_configured": true,
00:11:44.797 "data_offset": 2048,
00:11:44.797 "data_size": 63488
00:11:44.797 },
00:11:44.797 {
00:11:44.797 "name": "BaseBdev4",
00:11:44.797 "uuid": "1802e954-ed36-4bb3-a8f6-3325d9a4c3ca",
00:11:44.797 "is_configured": true,
00:11:44.797 "data_offset": 2048,
00:11:44.797 "data_size": 63488
00:11:44.797 }
00:11:44.797 ]
00:11:44.797 }'
00:11:44.797 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:44.797 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.057 [2024-09-28 08:49:22.972057] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:45.057 08:49:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.057 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:45.057 "name": "Existed_Raid",
00:11:45.057 "uuid": "26b206c2-094c-4d95-9400-399609bd5002",
00:11:45.057 "strip_size_kb": 0,
00:11:45.057 "state": "configuring",
00:11:45.057 "raid_level": "raid1",
00:11:45.057 "superblock": true,
00:11:45.057 "num_base_bdevs": 4,
00:11:45.057 "num_base_bdevs_discovered": 2,
00:11:45.057 "num_base_bdevs_operational": 4,
00:11:45.057 "base_bdevs_list": [
00:11:45.057 {
00:11:45.057 "name": "BaseBdev1",
00:11:45.057 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:45.057 "is_configured": false,
00:11:45.057 "data_offset": 0,
00:11:45.057 "data_size": 0
00:11:45.057 },
00:11:45.057 {
00:11:45.057 "name": null,
00:11:45.057 "uuid": "1827d834-bddd-4729-a110-7d1a71985129",
00:11:45.057 "is_configured": false,
00:11:45.057 "data_offset": 0,
00:11:45.057 "data_size": 63488
00:11:45.057 },
00:11:45.057 {
00:11:45.057 "name": "BaseBdev3",
00:11:45.057 "uuid": "97b94e4d-b53c-42bb-b7db-1ef2245d5d2d",
00:11:45.057 "is_configured": true,
00:11:45.057 "data_offset": 2048,
00:11:45.057 "data_size": 63488
00:11:45.057 },
00:11:45.057 {
00:11:45.057 "name": "BaseBdev4",
00:11:45.057 "uuid": "1802e954-ed36-4bb3-a8f6-3325d9a4c3ca",
00:11:45.057 "is_configured": true,
00:11:45.057 "data_offset": 2048,
00:11:45.057 "data_size": 63488
00:11:45.057 }
00:11:45.057 ]
00:11:45.057 }'
00:11:45.057 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:45.057 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.628 [2024-09-28 08:49:23.488740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:45.628 BaseBdev1
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.628 [
00:11:45.628 {
00:11:45.628 "name": "BaseBdev1",
00:11:45.628 "aliases": [
00:11:45.628 "f96dfefb-a7a7-44ba-bfac-2cb446b07a4d"
00:11:45.628 ],
00:11:45.628 "product_name": "Malloc disk",
00:11:45.628 "block_size": 512,
00:11:45.628 "num_blocks": 65536,
00:11:45.628 "uuid": "f96dfefb-a7a7-44ba-bfac-2cb446b07a4d",
00:11:45.628 "assigned_rate_limits": {
00:11:45.628 "rw_ios_per_sec": 0,
00:11:45.628 "rw_mbytes_per_sec": 0,
00:11:45.628 "r_mbytes_per_sec": 0,
00:11:45.628 "w_mbytes_per_sec": 0
00:11:45.628 },
00:11:45.628 "claimed": true,
00:11:45.628 "claim_type": "exclusive_write",
00:11:45.628 "zoned": false,
00:11:45.628 "supported_io_types": {
00:11:45.628 "read": true,
00:11:45.628 "write": true,
00:11:45.628 "unmap": true,
00:11:45.628 "flush": true,
00:11:45.628 "reset": true,
00:11:45.628 "nvme_admin": false,
00:11:45.628 "nvme_io": false,
00:11:45.628 "nvme_io_md": false,
00:11:45.628 "write_zeroes": true,
00:11:45.628 "zcopy": true,
00:11:45.628 "get_zone_info": false,
00:11:45.628 "zone_management": false,
00:11:45.628 "zone_append": false,
00:11:45.628 "compare": false,
00:11:45.628 "compare_and_write": false,
00:11:45.628 "abort": true,
00:11:45.628 "seek_hole": false,
00:11:45.628 "seek_data": false,
00:11:45.628 "copy": true,
00:11:45.628 "nvme_iov_md": false
00:11:45.628 },
00:11:45.628 "memory_domains": [
00:11:45.628 {
00:11:45.628 "dma_device_id": "system",
00:11:45.628 "dma_device_type": 1
00:11:45.628 },
00:11:45.628 {
00:11:45.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:45.628 "dma_device_type": 2
00:11:45.628 }
00:11:45.628 ],
00:11:45.628 "driver_specific": {}
00:11:45.628 }
00:11:45.628 ]
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:45.628 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:45.629 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:45.629 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:45.629 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.629 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.629 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:45.629 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.629 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:45.629 "name": "Existed_Raid",
00:11:45.629 "uuid": "26b206c2-094c-4d95-9400-399609bd5002",
00:11:45.629 "strip_size_kb": 0,
00:11:45.629 "state": "configuring",
00:11:45.629 "raid_level": "raid1",
00:11:45.629 "superblock": true,
00:11:45.629 "num_base_bdevs": 4,
00:11:45.629 "num_base_bdevs_discovered": 3,
00:11:45.629 "num_base_bdevs_operational": 4,
00:11:45.629 "base_bdevs_list": [
00:11:45.629 {
00:11:45.629 "name": "BaseBdev1",
00:11:45.629 "uuid": "f96dfefb-a7a7-44ba-bfac-2cb446b07a4d",
00:11:45.629 "is_configured": true,
00:11:45.629 "data_offset": 2048,
00:11:45.629 "data_size": 63488
00:11:45.629 },
00:11:45.629 {
00:11:45.629 "name": null,
00:11:45.629 "uuid": "1827d834-bddd-4729-a110-7d1a71985129",
00:11:45.629 "is_configured": false,
00:11:45.629 "data_offset": 0,
00:11:45.629 "data_size": 63488
00:11:45.629 },
00:11:45.629 {
00:11:45.629 "name": "BaseBdev3",
00:11:45.629 "uuid": "97b94e4d-b53c-42bb-b7db-1ef2245d5d2d",
00:11:45.629 "is_configured": true,
00:11:45.629 "data_offset": 2048,
00:11:45.629 "data_size": 63488
00:11:45.629 },
00:11:45.629 {
00:11:45.629 "name": "BaseBdev4",
00:11:45.629 "uuid": "1802e954-ed36-4bb3-a8f6-3325d9a4c3ca",
00:11:45.629 "is_configured": true,
00:11:45.629 "data_offset": 2048,
00:11:45.629 "data_size": 63488
00:11:45.629 }
00:11:45.629 ]
00:11:45.629 }'
00:11:45.629 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:45.629 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.198 [2024-09-28 08:49:23.979967] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.198 08:49:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.198 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.198 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:46.198 "name": "Existed_Raid",
00:11:46.198 "uuid": "26b206c2-094c-4d95-9400-399609bd5002",
00:11:46.198 "strip_size_kb": 0,
00:11:46.198 "state": "configuring",
00:11:46.198 "raid_level": "raid1",
00:11:46.198 "superblock": true,
00:11:46.198 "num_base_bdevs": 4,
00:11:46.198 "num_base_bdevs_discovered": 2,
00:11:46.198 "num_base_bdevs_operational": 4,
00:11:46.198 "base_bdevs_list": [
00:11:46.198 {
00:11:46.198 "name": "BaseBdev1",
00:11:46.198 "uuid": "f96dfefb-a7a7-44ba-bfac-2cb446b07a4d",
00:11:46.198 "is_configured": true,
00:11:46.198 "data_offset": 2048,
00:11:46.198 "data_size": 63488
00:11:46.198 },
00:11:46.198 {
00:11:46.198 "name": null,
00:11:46.198 "uuid": "1827d834-bddd-4729-a110-7d1a71985129",
00:11:46.198 "is_configured": false,
00:11:46.198 "data_offset": 0,
00:11:46.198 "data_size": 63488
00:11:46.198 },
00:11:46.198 {
00:11:46.198 "name": null,
00:11:46.198 "uuid": "97b94e4d-b53c-42bb-b7db-1ef2245d5d2d",
00:11:46.198 "is_configured": false,
00:11:46.198 "data_offset": 0,
00:11:46.198 "data_size": 63488
00:11:46.198 },
00:11:46.198 {
00:11:46.198 "name": "BaseBdev4",
00:11:46.198 "uuid": "1802e954-ed36-4bb3-a8f6-3325d9a4c3ca",
00:11:46.198 "is_configured": true,
00:11:46.198 "data_offset": 2048,
00:11:46.198 "data_size": 63488
00:11:46.198 }
00:11:46.198 ]
00:11:46.198 }'
00:11:46.198 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:46.198 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.458 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:46.458 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.458 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.458 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.717 [2024-09-28 08:49:24.471290] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local
num_base_bdevs_discovered 00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.717 "name": "Existed_Raid", 00:11:46.717 "uuid": "26b206c2-094c-4d95-9400-399609bd5002", 00:11:46.717 "strip_size_kb": 0, 00:11:46.717 "state": "configuring", 00:11:46.717 "raid_level": "raid1", 00:11:46.717 "superblock": true, 00:11:46.717 "num_base_bdevs": 4, 00:11:46.717 "num_base_bdevs_discovered": 3, 00:11:46.717 "num_base_bdevs_operational": 4, 00:11:46.717 "base_bdevs_list": [ 00:11:46.717 { 00:11:46.717 "name": "BaseBdev1", 00:11:46.717 "uuid": "f96dfefb-a7a7-44ba-bfac-2cb446b07a4d", 00:11:46.717 "is_configured": true, 00:11:46.717 "data_offset": 2048, 00:11:46.717 "data_size": 63488 00:11:46.717 }, 00:11:46.717 { 00:11:46.717 "name": null, 00:11:46.717 "uuid": "1827d834-bddd-4729-a110-7d1a71985129", 00:11:46.717 "is_configured": false, 00:11:46.717 "data_offset": 0, 00:11:46.717 "data_size": 63488 00:11:46.717 }, 00:11:46.717 { 00:11:46.717 "name": "BaseBdev3", 00:11:46.717 "uuid": "97b94e4d-b53c-42bb-b7db-1ef2245d5d2d", 00:11:46.717 "is_configured": true, 00:11:46.717 "data_offset": 2048, 00:11:46.717 "data_size": 63488 00:11:46.717 }, 00:11:46.717 { 00:11:46.717 "name": "BaseBdev4", 00:11:46.717 "uuid": 
"1802e954-ed36-4bb3-a8f6-3325d9a4c3ca", 00:11:46.717 "is_configured": true, 00:11:46.717 "data_offset": 2048, 00:11:46.717 "data_size": 63488 00:11:46.717 } 00:11:46.717 ] 00:11:46.717 }' 00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.717 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.976 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.977 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:46.977 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.977 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.977 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.977 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:46.977 08:49:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:46.977 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.977 08:49:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.977 [2024-09-28 08:49:24.954495] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.237 "name": "Existed_Raid", 00:11:47.237 "uuid": "26b206c2-094c-4d95-9400-399609bd5002", 00:11:47.237 "strip_size_kb": 0, 00:11:47.237 "state": "configuring", 00:11:47.237 "raid_level": "raid1", 00:11:47.237 "superblock": true, 00:11:47.237 "num_base_bdevs": 4, 00:11:47.237 "num_base_bdevs_discovered": 2, 00:11:47.237 "num_base_bdevs_operational": 4, 00:11:47.237 "base_bdevs_list": [ 00:11:47.237 { 00:11:47.237 "name": null, 00:11:47.237 
"uuid": "f96dfefb-a7a7-44ba-bfac-2cb446b07a4d", 00:11:47.237 "is_configured": false, 00:11:47.237 "data_offset": 0, 00:11:47.237 "data_size": 63488 00:11:47.237 }, 00:11:47.237 { 00:11:47.237 "name": null, 00:11:47.237 "uuid": "1827d834-bddd-4729-a110-7d1a71985129", 00:11:47.237 "is_configured": false, 00:11:47.237 "data_offset": 0, 00:11:47.237 "data_size": 63488 00:11:47.237 }, 00:11:47.237 { 00:11:47.237 "name": "BaseBdev3", 00:11:47.237 "uuid": "97b94e4d-b53c-42bb-b7db-1ef2245d5d2d", 00:11:47.237 "is_configured": true, 00:11:47.237 "data_offset": 2048, 00:11:47.237 "data_size": 63488 00:11:47.237 }, 00:11:47.237 { 00:11:47.237 "name": "BaseBdev4", 00:11:47.237 "uuid": "1802e954-ed36-4bb3-a8f6-3325d9a4c3ca", 00:11:47.237 "is_configured": true, 00:11:47.237 "data_offset": 2048, 00:11:47.237 "data_size": 63488 00:11:47.237 } 00:11:47.237 ] 00:11:47.237 }' 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.237 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.497 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.497 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:47.497 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.497 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.756 [2024-09-28 08:49:25.524818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.756 08:49:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.756 "name": "Existed_Raid", 00:11:47.756 "uuid": "26b206c2-094c-4d95-9400-399609bd5002", 00:11:47.756 "strip_size_kb": 0, 00:11:47.756 "state": "configuring", 00:11:47.756 "raid_level": "raid1", 00:11:47.756 "superblock": true, 00:11:47.756 "num_base_bdevs": 4, 00:11:47.756 "num_base_bdevs_discovered": 3, 00:11:47.756 "num_base_bdevs_operational": 4, 00:11:47.756 "base_bdevs_list": [ 00:11:47.756 { 00:11:47.756 "name": null, 00:11:47.756 "uuid": "f96dfefb-a7a7-44ba-bfac-2cb446b07a4d", 00:11:47.756 "is_configured": false, 00:11:47.756 "data_offset": 0, 00:11:47.756 "data_size": 63488 00:11:47.756 }, 00:11:47.756 { 00:11:47.756 "name": "BaseBdev2", 00:11:47.756 "uuid": "1827d834-bddd-4729-a110-7d1a71985129", 00:11:47.756 "is_configured": true, 00:11:47.756 "data_offset": 2048, 00:11:47.756 "data_size": 63488 00:11:47.756 }, 00:11:47.756 { 00:11:47.756 "name": "BaseBdev3", 00:11:47.756 "uuid": "97b94e4d-b53c-42bb-b7db-1ef2245d5d2d", 00:11:47.756 "is_configured": true, 00:11:47.756 "data_offset": 2048, 00:11:47.756 "data_size": 63488 00:11:47.756 }, 00:11:47.756 { 00:11:47.756 "name": "BaseBdev4", 00:11:47.756 "uuid": "1802e954-ed36-4bb3-a8f6-3325d9a4c3ca", 00:11:47.756 "is_configured": true, 00:11:47.756 "data_offset": 2048, 00:11:47.756 "data_size": 63488 00:11:47.756 } 00:11:47.756 ] 00:11:47.756 }' 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.756 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.014 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.014 08:49:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.014 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.014 08:49:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:48.014 08:49:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.014 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:48.014 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.014 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:48.014 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.014 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f96dfefb-a7a7-44ba-bfac-2cb446b07a4d 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.274 [2024-09-28 08:49:26.097325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:48.274 [2024-09-28 08:49:26.097691] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:48.274 [2024-09-28 08:49:26.097758] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:48.274 [2024-09-28 08:49:26.098068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:48.274 [2024-09-28 08:49:26.098273] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:48.274 [2024-09-28 08:49:26.098316] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:48.274 NewBaseBdev 00:11:48.274 [2024-09-28 08:49:26.098495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.274 08:49:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.274 [ 00:11:48.274 { 00:11:48.274 "name": "NewBaseBdev", 00:11:48.274 "aliases": [ 00:11:48.274 "f96dfefb-a7a7-44ba-bfac-2cb446b07a4d" 00:11:48.274 ], 00:11:48.274 "product_name": "Malloc disk", 00:11:48.274 "block_size": 512, 00:11:48.274 "num_blocks": 65536, 00:11:48.274 "uuid": "f96dfefb-a7a7-44ba-bfac-2cb446b07a4d", 00:11:48.274 "assigned_rate_limits": { 00:11:48.274 "rw_ios_per_sec": 0, 00:11:48.274 "rw_mbytes_per_sec": 0, 00:11:48.274 "r_mbytes_per_sec": 0, 00:11:48.274 "w_mbytes_per_sec": 0 00:11:48.274 }, 00:11:48.274 "claimed": true, 00:11:48.274 "claim_type": "exclusive_write", 00:11:48.274 "zoned": false, 00:11:48.274 "supported_io_types": { 00:11:48.274 "read": true, 00:11:48.274 "write": true, 00:11:48.274 "unmap": true, 00:11:48.274 "flush": true, 00:11:48.274 "reset": true, 00:11:48.274 "nvme_admin": false, 00:11:48.274 "nvme_io": false, 00:11:48.274 "nvme_io_md": false, 00:11:48.274 "write_zeroes": true, 00:11:48.274 "zcopy": true, 00:11:48.274 "get_zone_info": false, 00:11:48.274 "zone_management": false, 00:11:48.274 "zone_append": false, 00:11:48.274 "compare": false, 00:11:48.274 "compare_and_write": false, 00:11:48.274 "abort": true, 00:11:48.274 "seek_hole": false, 00:11:48.274 "seek_data": false, 00:11:48.274 "copy": true, 00:11:48.274 "nvme_iov_md": false 00:11:48.274 }, 00:11:48.274 "memory_domains": [ 00:11:48.274 { 00:11:48.274 "dma_device_id": "system", 00:11:48.274 "dma_device_type": 1 00:11:48.274 }, 00:11:48.274 { 00:11:48.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.274 "dma_device_type": 2 00:11:48.274 } 00:11:48.274 ], 00:11:48.274 "driver_specific": {} 00:11:48.274 } 00:11:48.274 ] 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:48.274 08:49:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.274 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.275 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.275 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.275 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.275 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.275 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.275 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.275 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.275 "name": "Existed_Raid", 00:11:48.275 "uuid": "26b206c2-094c-4d95-9400-399609bd5002", 00:11:48.275 "strip_size_kb": 0, 00:11:48.275 
"state": "online", 00:11:48.275 "raid_level": "raid1", 00:11:48.275 "superblock": true, 00:11:48.275 "num_base_bdevs": 4, 00:11:48.275 "num_base_bdevs_discovered": 4, 00:11:48.275 "num_base_bdevs_operational": 4, 00:11:48.275 "base_bdevs_list": [ 00:11:48.275 { 00:11:48.275 "name": "NewBaseBdev", 00:11:48.275 "uuid": "f96dfefb-a7a7-44ba-bfac-2cb446b07a4d", 00:11:48.275 "is_configured": true, 00:11:48.275 "data_offset": 2048, 00:11:48.275 "data_size": 63488 00:11:48.275 }, 00:11:48.275 { 00:11:48.275 "name": "BaseBdev2", 00:11:48.275 "uuid": "1827d834-bddd-4729-a110-7d1a71985129", 00:11:48.275 "is_configured": true, 00:11:48.275 "data_offset": 2048, 00:11:48.275 "data_size": 63488 00:11:48.275 }, 00:11:48.275 { 00:11:48.275 "name": "BaseBdev3", 00:11:48.275 "uuid": "97b94e4d-b53c-42bb-b7db-1ef2245d5d2d", 00:11:48.275 "is_configured": true, 00:11:48.275 "data_offset": 2048, 00:11:48.275 "data_size": 63488 00:11:48.275 }, 00:11:48.275 { 00:11:48.275 "name": "BaseBdev4", 00:11:48.275 "uuid": "1802e954-ed36-4bb3-a8f6-3325d9a4c3ca", 00:11:48.275 "is_configured": true, 00:11:48.275 "data_offset": 2048, 00:11:48.275 "data_size": 63488 00:11:48.275 } 00:11:48.275 ] 00:11:48.275 }' 00:11:48.275 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.275 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.843 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:48.843 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:48.843 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.843 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.843 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.843 
08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.843 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.843 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:48.843 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.843 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.843 [2024-09-28 08:49:26.542283] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.843 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.843 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.843 "name": "Existed_Raid", 00:11:48.843 "aliases": [ 00:11:48.843 "26b206c2-094c-4d95-9400-399609bd5002" 00:11:48.843 ], 00:11:48.843 "product_name": "Raid Volume", 00:11:48.843 "block_size": 512, 00:11:48.843 "num_blocks": 63488, 00:11:48.843 "uuid": "26b206c2-094c-4d95-9400-399609bd5002", 00:11:48.843 "assigned_rate_limits": { 00:11:48.843 "rw_ios_per_sec": 0, 00:11:48.843 "rw_mbytes_per_sec": 0, 00:11:48.843 "r_mbytes_per_sec": 0, 00:11:48.843 "w_mbytes_per_sec": 0 00:11:48.843 }, 00:11:48.843 "claimed": false, 00:11:48.843 "zoned": false, 00:11:48.843 "supported_io_types": { 00:11:48.843 "read": true, 00:11:48.843 "write": true, 00:11:48.843 "unmap": false, 00:11:48.843 "flush": false, 00:11:48.843 "reset": true, 00:11:48.843 "nvme_admin": false, 00:11:48.843 "nvme_io": false, 00:11:48.843 "nvme_io_md": false, 00:11:48.843 "write_zeroes": true, 00:11:48.843 "zcopy": false, 00:11:48.843 "get_zone_info": false, 00:11:48.843 "zone_management": false, 00:11:48.843 "zone_append": false, 00:11:48.843 "compare": false, 00:11:48.843 "compare_and_write": false, 00:11:48.843 
"abort": false, 00:11:48.843 "seek_hole": false, 00:11:48.844 "seek_data": false, 00:11:48.844 "copy": false, 00:11:48.844 "nvme_iov_md": false 00:11:48.844 }, 00:11:48.844 "memory_domains": [ 00:11:48.844 { 00:11:48.844 "dma_device_id": "system", 00:11:48.844 "dma_device_type": 1 00:11:48.844 }, 00:11:48.844 { 00:11:48.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.844 "dma_device_type": 2 00:11:48.844 }, 00:11:48.844 { 00:11:48.844 "dma_device_id": "system", 00:11:48.844 "dma_device_type": 1 00:11:48.844 }, 00:11:48.844 { 00:11:48.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.844 "dma_device_type": 2 00:11:48.844 }, 00:11:48.844 { 00:11:48.844 "dma_device_id": "system", 00:11:48.844 "dma_device_type": 1 00:11:48.844 }, 00:11:48.844 { 00:11:48.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.844 "dma_device_type": 2 00:11:48.844 }, 00:11:48.844 { 00:11:48.844 "dma_device_id": "system", 00:11:48.844 "dma_device_type": 1 00:11:48.844 }, 00:11:48.844 { 00:11:48.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.844 "dma_device_type": 2 00:11:48.844 } 00:11:48.844 ], 00:11:48.844 "driver_specific": { 00:11:48.844 "raid": { 00:11:48.844 "uuid": "26b206c2-094c-4d95-9400-399609bd5002", 00:11:48.844 "strip_size_kb": 0, 00:11:48.844 "state": "online", 00:11:48.844 "raid_level": "raid1", 00:11:48.844 "superblock": true, 00:11:48.844 "num_base_bdevs": 4, 00:11:48.844 "num_base_bdevs_discovered": 4, 00:11:48.844 "num_base_bdevs_operational": 4, 00:11:48.844 "base_bdevs_list": [ 00:11:48.844 { 00:11:48.844 "name": "NewBaseBdev", 00:11:48.844 "uuid": "f96dfefb-a7a7-44ba-bfac-2cb446b07a4d", 00:11:48.844 "is_configured": true, 00:11:48.844 "data_offset": 2048, 00:11:48.844 "data_size": 63488 00:11:48.844 }, 00:11:48.844 { 00:11:48.844 "name": "BaseBdev2", 00:11:48.844 "uuid": "1827d834-bddd-4729-a110-7d1a71985129", 00:11:48.844 "is_configured": true, 00:11:48.844 "data_offset": 2048, 00:11:48.844 "data_size": 63488 00:11:48.844 }, 00:11:48.844 { 
00:11:48.844 "name": "BaseBdev3", 00:11:48.844 "uuid": "97b94e4d-b53c-42bb-b7db-1ef2245d5d2d", 00:11:48.844 "is_configured": true, 00:11:48.844 "data_offset": 2048, 00:11:48.844 "data_size": 63488 00:11:48.844 }, 00:11:48.844 { 00:11:48.844 "name": "BaseBdev4", 00:11:48.844 "uuid": "1802e954-ed36-4bb3-a8f6-3325d9a4c3ca", 00:11:48.844 "is_configured": true, 00:11:48.844 "data_offset": 2048, 00:11:48.844 "data_size": 63488 00:11:48.844 } 00:11:48.844 ] 00:11:48.844 } 00:11:48.844 } 00:11:48.844 }' 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:48.844 BaseBdev2 00:11:48.844 BaseBdev3 00:11:48.844 BaseBdev4' 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.844 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.105 [2024-09-28 08:49:26.857391] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:49.105 [2024-09-28 08:49:26.857450] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.105 [2024-09-28 08:49:26.857607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.105 [2024-09-28 08:49:26.858012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.105 [2024-09-28 08:49:26.858083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73865 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73865 ']' 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73865 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73865 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73865' 00:11:49.105 killing process with pid 73865 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73865 00:11:49.105 [2024-09-28 08:49:26.908085] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.105 08:49:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73865 00:11:49.365 [2024-09-28 08:49:27.325958] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.746 08:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:50.746 00:11:50.746 real 0m11.734s 00:11:50.746 user 0m18.267s 00:11:50.746 sys 0m2.212s 00:11:50.746 08:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:50.746 08:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.746 ************************************ 00:11:50.746 END TEST raid_state_function_test_sb 00:11:50.746 ************************************ 00:11:50.746 08:49:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:50.746 08:49:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:50.746 08:49:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.746 08:49:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.746 ************************************ 00:11:50.746 START TEST raid_superblock_test 00:11:50.746 ************************************ 00:11:50.746 08:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:11:50.746 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:50.746 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:50.746 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:50.746 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:50.747 08:49:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74535 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74535 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74535 ']' 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:50.747 08:49:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.007 [2024-09-28 08:49:28.814367] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
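In the trace above, `waitforlisten 74535` (common/autotest_common.sh@835-840) blocks until the freshly launched `bdev_svc` app accepts RPCs on `/var/tmp/spdk.sock`, retrying up to `max_retries=100`. A minimal sketch of that polling pattern in Python — the socket path and retry bound are taken from the log, but the real helper is a bash function with additional checks (e.g. that the pid is still alive), so treat this as an illustration only:

```python
import os
import socket
import time

def waitforlisten(sock_path, max_retries=100, delay=0.1):
    """Poll until a UNIX-domain socket accepts connections, in the spirit of
    the waitforlisten helper seen in the trace (sketch, not the real code)."""
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True  # app is up and listening
            except OSError:
                pass  # socket file exists but nothing is accepting yet
            finally:
                s.close()
        time.sleep(delay)
    return False
```

Only once this returns does the test start issuing `rpc_cmd` calls such as `bdev_malloc_create`, which is why the "Waiting for process to start up and listen..." message precedes all RPC output in the log.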
00:11:51.007 [2024-09-28 08:49:28.814620] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74535 ] 00:11:51.007 [2024-09-28 08:49:28.983747] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.266 [2024-09-28 08:49:29.230720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.526 [2024-09-28 08:49:29.461510] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.526 [2024-09-28 08:49:29.461640] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:51.787 
08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.787 malloc1 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.787 [2024-09-28 08:49:29.684541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:51.787 [2024-09-28 08:49:29.684861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.787 [2024-09-28 08:49:29.684931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:51.787 [2024-09-28 08:49:29.685032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.787 [2024-09-28 08:49:29.687385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.787 [2024-09-28 08:49:29.687449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:51.787 pt1 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.787 malloc2 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.787 [2024-09-28 08:49:29.775572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:51.787 [2024-09-28 08:49:29.775675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.787 [2024-09-28 08:49:29.775717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:51.787 [2024-09-28 08:49:29.775751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.787 [2024-09-28 08:49:29.778117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.787 [2024-09-28 08:49:29.778185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:51.787 
pt2 00:11:51.787 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.047 malloc3 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.047 [2024-09-28 08:49:29.837095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:52.047 [2024-09-28 08:49:29.837195] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.047 [2024-09-28 08:49:29.837233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:52.047 [2024-09-28 08:49:29.837260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.047 [2024-09-28 08:49:29.839599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.047 [2024-09-28 08:49:29.839672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:52.047 pt3 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.047 malloc4 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.047 [2024-09-28 08:49:29.896740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:52.047 [2024-09-28 08:49:29.896837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.047 [2024-09-28 08:49:29.896873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:52.047 [2024-09-28 08:49:29.896900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.047 [2024-09-28 08:49:29.899197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.047 [2024-09-28 08:49:29.899279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:52.047 pt4 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.047 [2024-09-28 08:49:29.908783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:52.047 [2024-09-28 08:49:29.910833] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:52.047 [2024-09-28 08:49:29.910967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:52.047 [2024-09-28 08:49:29.911048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:52.047 [2024-09-28 08:49:29.911291] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:52.047 [2024-09-28 08:49:29.911340] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:52.047 [2024-09-28 08:49:29.911624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:52.047 [2024-09-28 08:49:29.911848] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:52.047 [2024-09-28 08:49:29.911898] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:52.047 [2024-09-28 08:49:29.912078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.047 
08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.047 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.047 "name": "raid_bdev1", 00:11:52.047 "uuid": "67becc16-85b5-4c2d-b3f2-1f5e217cfb24", 00:11:52.047 "strip_size_kb": 0, 00:11:52.047 "state": "online", 00:11:52.047 "raid_level": "raid1", 00:11:52.047 "superblock": true, 00:11:52.047 "num_base_bdevs": 4, 00:11:52.047 "num_base_bdevs_discovered": 4, 00:11:52.047 "num_base_bdevs_operational": 4, 00:11:52.047 "base_bdevs_list": [ 00:11:52.047 { 00:11:52.047 "name": "pt1", 00:11:52.047 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:52.047 "is_configured": true, 00:11:52.047 "data_offset": 2048, 00:11:52.048 "data_size": 63488 00:11:52.048 }, 00:11:52.048 { 00:11:52.048 "name": "pt2", 00:11:52.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:52.048 "is_configured": true, 00:11:52.048 "data_offset": 2048, 00:11:52.048 "data_size": 63488 00:11:52.048 }, 00:11:52.048 { 00:11:52.048 "name": "pt3", 00:11:52.048 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:52.048 "is_configured": true, 00:11:52.048 "data_offset": 2048, 00:11:52.048 "data_size": 63488 
00:11:52.048 }, 00:11:52.048 { 00:11:52.048 "name": "pt4", 00:11:52.048 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:52.048 "is_configured": true, 00:11:52.048 "data_offset": 2048, 00:11:52.048 "data_size": 63488 00:11:52.048 } 00:11:52.048 ] 00:11:52.048 }' 00:11:52.048 08:49:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.048 08:49:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.625 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:52.625 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:52.625 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:52.625 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:52.625 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:52.625 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:52.625 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:52.625 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.625 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.625 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:52.626 [2024-09-28 08:49:30.348336] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:52.626 "name": "raid_bdev1", 00:11:52.626 "aliases": [ 00:11:52.626 "67becc16-85b5-4c2d-b3f2-1f5e217cfb24" 00:11:52.626 ], 
00:11:52.626 "product_name": "Raid Volume", 00:11:52.626 "block_size": 512, 00:11:52.626 "num_blocks": 63488, 00:11:52.626 "uuid": "67becc16-85b5-4c2d-b3f2-1f5e217cfb24", 00:11:52.626 "assigned_rate_limits": { 00:11:52.626 "rw_ios_per_sec": 0, 00:11:52.626 "rw_mbytes_per_sec": 0, 00:11:52.626 "r_mbytes_per_sec": 0, 00:11:52.626 "w_mbytes_per_sec": 0 00:11:52.626 }, 00:11:52.626 "claimed": false, 00:11:52.626 "zoned": false, 00:11:52.626 "supported_io_types": { 00:11:52.626 "read": true, 00:11:52.626 "write": true, 00:11:52.626 "unmap": false, 00:11:52.626 "flush": false, 00:11:52.626 "reset": true, 00:11:52.626 "nvme_admin": false, 00:11:52.626 "nvme_io": false, 00:11:52.626 "nvme_io_md": false, 00:11:52.626 "write_zeroes": true, 00:11:52.626 "zcopy": false, 00:11:52.626 "get_zone_info": false, 00:11:52.626 "zone_management": false, 00:11:52.626 "zone_append": false, 00:11:52.626 "compare": false, 00:11:52.626 "compare_and_write": false, 00:11:52.626 "abort": false, 00:11:52.626 "seek_hole": false, 00:11:52.626 "seek_data": false, 00:11:52.626 "copy": false, 00:11:52.626 "nvme_iov_md": false 00:11:52.626 }, 00:11:52.626 "memory_domains": [ 00:11:52.626 { 00:11:52.626 "dma_device_id": "system", 00:11:52.626 "dma_device_type": 1 00:11:52.626 }, 00:11:52.626 { 00:11:52.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.626 "dma_device_type": 2 00:11:52.626 }, 00:11:52.626 { 00:11:52.626 "dma_device_id": "system", 00:11:52.626 "dma_device_type": 1 00:11:52.626 }, 00:11:52.626 { 00:11:52.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.626 "dma_device_type": 2 00:11:52.626 }, 00:11:52.626 { 00:11:52.626 "dma_device_id": "system", 00:11:52.626 "dma_device_type": 1 00:11:52.626 }, 00:11:52.626 { 00:11:52.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.626 "dma_device_type": 2 00:11:52.626 }, 00:11:52.626 { 00:11:52.626 "dma_device_id": "system", 00:11:52.626 "dma_device_type": 1 00:11:52.626 }, 00:11:52.626 { 00:11:52.626 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:52.626 "dma_device_type": 2 00:11:52.626 } 00:11:52.626 ], 00:11:52.626 "driver_specific": { 00:11:52.626 "raid": { 00:11:52.626 "uuid": "67becc16-85b5-4c2d-b3f2-1f5e217cfb24", 00:11:52.626 "strip_size_kb": 0, 00:11:52.626 "state": "online", 00:11:52.626 "raid_level": "raid1", 00:11:52.626 "superblock": true, 00:11:52.626 "num_base_bdevs": 4, 00:11:52.626 "num_base_bdevs_discovered": 4, 00:11:52.626 "num_base_bdevs_operational": 4, 00:11:52.626 "base_bdevs_list": [ 00:11:52.626 { 00:11:52.626 "name": "pt1", 00:11:52.626 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:52.626 "is_configured": true, 00:11:52.626 "data_offset": 2048, 00:11:52.626 "data_size": 63488 00:11:52.626 }, 00:11:52.626 { 00:11:52.626 "name": "pt2", 00:11:52.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:52.626 "is_configured": true, 00:11:52.626 "data_offset": 2048, 00:11:52.626 "data_size": 63488 00:11:52.626 }, 00:11:52.626 { 00:11:52.626 "name": "pt3", 00:11:52.626 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:52.626 "is_configured": true, 00:11:52.626 "data_offset": 2048, 00:11:52.626 "data_size": 63488 00:11:52.626 }, 00:11:52.626 { 00:11:52.626 "name": "pt4", 00:11:52.626 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:52.626 "is_configured": true, 00:11:52.626 "data_offset": 2048, 00:11:52.626 "data_size": 63488 00:11:52.626 } 00:11:52.626 ] 00:11:52.626 } 00:11:52.626 } 00:11:52.626 }' 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:52.626 pt2 00:11:52.626 pt3 00:11:52.626 pt4' 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.626 08:49:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.626 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.886 [2024-09-28 08:49:30.687635] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=67becc16-85b5-4c2d-b3f2-1f5e217cfb24 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 67becc16-85b5-4c2d-b3f2-1f5e217cfb24 ']' 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.886 [2024-09-28 08:49:30.723300] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:52.886 [2024-09-28 08:49:30.723359] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.886 [2024-09-28 08:49:30.723456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.886 [2024-09-28 08:49:30.723563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.886 [2024-09-28 08:49:30.723644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.886 08:49:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:52.887 08:49:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.887 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.146 [2024-09-28 08:49:30.883098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:53.146 [2024-09-28 08:49:30.885309] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:53.146 [2024-09-28 08:49:30.885402] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:53.146 [2024-09-28 08:49:30.885454] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:53.146 [2024-09-28 08:49:30.885534] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:53.146 [2024-09-28 08:49:30.885615] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:53.146 [2024-09-28 08:49:30.885679] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:53.147 [2024-09-28 08:49:30.885729] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:53.147 [2024-09-28 08:49:30.885776] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.147 [2024-09-28 08:49:30.885818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:11:53.147 request: 00:11:53.147 { 00:11:53.147 "name": "raid_bdev1", 00:11:53.147 "raid_level": "raid1", 00:11:53.147 "base_bdevs": [ 00:11:53.147 "malloc1", 00:11:53.147 "malloc2", 00:11:53.147 "malloc3", 00:11:53.147 "malloc4" 00:11:53.147 ], 00:11:53.147 "superblock": false, 00:11:53.147 "method": "bdev_raid_create", 00:11:53.147 "req_id": 1 00:11:53.147 } 00:11:53.147 Got JSON-RPC error response 00:11:53.147 response: 00:11:53.147 { 00:11:53.147 "code": -17, 00:11:53.147 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:53.147 } 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:53.147 
08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.147 [2024-09-28 08:49:30.950960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:53.147 [2024-09-28 08:49:30.951055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.147 [2024-09-28 08:49:30.951086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:53.147 [2024-09-28 08:49:30.951115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.147 [2024-09-28 08:49:30.953528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.147 [2024-09-28 08:49:30.953616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:53.147 [2024-09-28 08:49:30.953715] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:53.147 [2024-09-28 08:49:30.953794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:53.147 pt1 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.147 08:49:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.147 "name": "raid_bdev1", 00:11:53.147 "uuid": "67becc16-85b5-4c2d-b3f2-1f5e217cfb24", 00:11:53.147 "strip_size_kb": 0, 00:11:53.147 "state": "configuring", 00:11:53.147 "raid_level": "raid1", 00:11:53.147 "superblock": true, 00:11:53.147 "num_base_bdevs": 4, 00:11:53.147 "num_base_bdevs_discovered": 1, 00:11:53.147 "num_base_bdevs_operational": 4, 00:11:53.147 "base_bdevs_list": [ 00:11:53.147 { 00:11:53.147 "name": "pt1", 00:11:53.147 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:53.147 "is_configured": true, 00:11:53.147 "data_offset": 2048, 00:11:53.147 "data_size": 63488 00:11:53.147 }, 00:11:53.147 { 00:11:53.147 "name": null, 00:11:53.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:53.147 "is_configured": false, 00:11:53.147 "data_offset": 2048, 00:11:53.147 "data_size": 63488 00:11:53.147 }, 00:11:53.147 { 00:11:53.147 "name": null, 00:11:53.147 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:53.147 
"is_configured": false, 00:11:53.147 "data_offset": 2048, 00:11:53.147 "data_size": 63488 00:11:53.147 }, 00:11:53.147 { 00:11:53.147 "name": null, 00:11:53.147 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:53.147 "is_configured": false, 00:11:53.147 "data_offset": 2048, 00:11:53.147 "data_size": 63488 00:11:53.147 } 00:11:53.147 ] 00:11:53.147 }' 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.147 08:49:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.407 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:53.407 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:53.407 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.407 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.407 [2024-09-28 08:49:31.394235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:53.407 [2024-09-28 08:49:31.394342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.407 [2024-09-28 08:49:31.394391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:53.407 [2024-09-28 08:49:31.394427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.407 [2024-09-28 08:49:31.394945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.407 [2024-09-28 08:49:31.395004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:53.407 [2024-09-28 08:49:31.395114] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:53.407 [2024-09-28 08:49:31.395182] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:53.407 pt2 00:11:53.407 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.407 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:53.407 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.667 [2024-09-28 08:49:31.406213] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.667 "name": "raid_bdev1", 00:11:53.667 "uuid": "67becc16-85b5-4c2d-b3f2-1f5e217cfb24", 00:11:53.667 "strip_size_kb": 0, 00:11:53.667 "state": "configuring", 00:11:53.667 "raid_level": "raid1", 00:11:53.667 "superblock": true, 00:11:53.667 "num_base_bdevs": 4, 00:11:53.667 "num_base_bdevs_discovered": 1, 00:11:53.667 "num_base_bdevs_operational": 4, 00:11:53.667 "base_bdevs_list": [ 00:11:53.667 { 00:11:53.667 "name": "pt1", 00:11:53.667 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:53.667 "is_configured": true, 00:11:53.667 "data_offset": 2048, 00:11:53.667 "data_size": 63488 00:11:53.667 }, 00:11:53.667 { 00:11:53.667 "name": null, 00:11:53.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:53.667 "is_configured": false, 00:11:53.667 "data_offset": 0, 00:11:53.667 "data_size": 63488 00:11:53.667 }, 00:11:53.667 { 00:11:53.667 "name": null, 00:11:53.667 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:53.667 "is_configured": false, 00:11:53.667 "data_offset": 2048, 00:11:53.667 "data_size": 63488 00:11:53.667 }, 00:11:53.667 { 00:11:53.667 "name": null, 00:11:53.667 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:53.667 "is_configured": false, 00:11:53.667 "data_offset": 2048, 00:11:53.667 "data_size": 63488 00:11:53.667 } 00:11:53.667 ] 00:11:53.667 }' 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.667 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.928 [2024-09-28 08:49:31.877391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:53.928 [2024-09-28 08:49:31.877484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.928 [2024-09-28 08:49:31.877528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:53.928 [2024-09-28 08:49:31.877560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.928 [2024-09-28 08:49:31.878049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.928 [2024-09-28 08:49:31.878103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:53.928 [2024-09-28 08:49:31.878215] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:53.928 [2024-09-28 08:49:31.878274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:53.928 pt2 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:53.928 08:49:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.928 [2024-09-28 08:49:31.889356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:53.928 [2024-09-28 08:49:31.889434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.928 [2024-09-28 08:49:31.889466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:53.928 [2024-09-28 08:49:31.889492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.928 [2024-09-28 08:49:31.889899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.928 [2024-09-28 08:49:31.889950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:53.928 [2024-09-28 08:49:31.890031] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:53.928 [2024-09-28 08:49:31.890071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:53.928 pt3 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.928 [2024-09-28 08:49:31.901301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:53.928 [2024-09-28 
08:49:31.901385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.928 [2024-09-28 08:49:31.901416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:53.928 [2024-09-28 08:49:31.901442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.928 [2024-09-28 08:49:31.901838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.928 [2024-09-28 08:49:31.901889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:53.928 [2024-09-28 08:49:31.901972] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:53.928 [2024-09-28 08:49:31.902028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:53.928 [2024-09-28 08:49:31.902195] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:53.928 [2024-09-28 08:49:31.902233] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:53.928 [2024-09-28 08:49:31.902503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:53.928 [2024-09-28 08:49:31.902708] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:53.928 [2024-09-28 08:49:31.902755] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:53.928 [2024-09-28 08:49:31.902905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.928 pt4 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.928 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.188 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.188 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.188 "name": "raid_bdev1", 00:11:54.188 "uuid": "67becc16-85b5-4c2d-b3f2-1f5e217cfb24", 00:11:54.188 "strip_size_kb": 0, 00:11:54.188 "state": "online", 00:11:54.188 "raid_level": "raid1", 00:11:54.188 "superblock": true, 00:11:54.188 "num_base_bdevs": 4, 00:11:54.188 
"num_base_bdevs_discovered": 4, 00:11:54.188 "num_base_bdevs_operational": 4, 00:11:54.188 "base_bdevs_list": [ 00:11:54.188 { 00:11:54.188 "name": "pt1", 00:11:54.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:54.188 "is_configured": true, 00:11:54.188 "data_offset": 2048, 00:11:54.188 "data_size": 63488 00:11:54.188 }, 00:11:54.188 { 00:11:54.188 "name": "pt2", 00:11:54.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:54.188 "is_configured": true, 00:11:54.188 "data_offset": 2048, 00:11:54.188 "data_size": 63488 00:11:54.188 }, 00:11:54.188 { 00:11:54.188 "name": "pt3", 00:11:54.188 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:54.188 "is_configured": true, 00:11:54.188 "data_offset": 2048, 00:11:54.188 "data_size": 63488 00:11:54.188 }, 00:11:54.188 { 00:11:54.188 "name": "pt4", 00:11:54.188 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:54.188 "is_configured": true, 00:11:54.188 "data_offset": 2048, 00:11:54.188 "data_size": 63488 00:11:54.188 } 00:11:54.188 ] 00:11:54.188 }' 00:11:54.188 08:49:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.188 08:49:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.447 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:54.447 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:54.447 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:54.447 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:54.447 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:54.447 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:54.447 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1
00:11:54.447 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:54.447 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.447 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.447 [2024-09-28 08:49:32.360877] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:54.447 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.447 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:54.447 "name": "raid_bdev1",
00:11:54.447 "aliases": [
00:11:54.447 "67becc16-85b5-4c2d-b3f2-1f5e217cfb24"
00:11:54.447 ],
00:11:54.447 "product_name": "Raid Volume",
00:11:54.447 "block_size": 512,
00:11:54.447 "num_blocks": 63488,
00:11:54.447 "uuid": "67becc16-85b5-4c2d-b3f2-1f5e217cfb24",
00:11:54.447 "assigned_rate_limits": {
00:11:54.447 "rw_ios_per_sec": 0,
00:11:54.447 "rw_mbytes_per_sec": 0,
00:11:54.447 "r_mbytes_per_sec": 0,
00:11:54.447 "w_mbytes_per_sec": 0
00:11:54.447 },
00:11:54.447 "claimed": false,
00:11:54.447 "zoned": false,
00:11:54.447 "supported_io_types": {
00:11:54.447 "read": true,
00:11:54.447 "write": true,
00:11:54.447 "unmap": false,
00:11:54.447 "flush": false,
00:11:54.447 "reset": true,
00:11:54.447 "nvme_admin": false,
00:11:54.447 "nvme_io": false,
00:11:54.447 "nvme_io_md": false,
00:11:54.447 "write_zeroes": true,
00:11:54.447 "zcopy": false,
00:11:54.447 "get_zone_info": false,
00:11:54.447 "zone_management": false,
00:11:54.447 "zone_append": false,
00:11:54.447 "compare": false,
00:11:54.447 "compare_and_write": false,
00:11:54.447 "abort": false,
00:11:54.447 "seek_hole": false,
00:11:54.447 "seek_data": false,
00:11:54.447 "copy": false,
00:11:54.447 "nvme_iov_md": false
00:11:54.447 },
00:11:54.447 "memory_domains": [
00:11:54.447 {
00:11:54.447 "dma_device_id": "system",
00:11:54.447 "dma_device_type": 1
00:11:54.447 },
00:11:54.447 {
00:11:54.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:54.447 "dma_device_type": 2
00:11:54.447 },
00:11:54.447 {
00:11:54.447 "dma_device_id": "system",
00:11:54.447 "dma_device_type": 1
00:11:54.447 },
00:11:54.447 {
00:11:54.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:54.447 "dma_device_type": 2
00:11:54.447 },
00:11:54.447 {
00:11:54.447 "dma_device_id": "system",
00:11:54.447 "dma_device_type": 1
00:11:54.447 },
00:11:54.447 {
00:11:54.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:54.447 "dma_device_type": 2
00:11:54.447 },
00:11:54.447 {
00:11:54.447 "dma_device_id": "system",
00:11:54.447 "dma_device_type": 1
00:11:54.447 },
00:11:54.447 {
00:11:54.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:54.447 "dma_device_type": 2
00:11:54.447 }
00:11:54.447 ],
00:11:54.447 "driver_specific": {
00:11:54.447 "raid": {
00:11:54.447 "uuid": "67becc16-85b5-4c2d-b3f2-1f5e217cfb24",
00:11:54.447 "strip_size_kb": 0,
00:11:54.447 "state": "online",
00:11:54.447 "raid_level": "raid1",
00:11:54.447 "superblock": true,
00:11:54.447 "num_base_bdevs": 4,
00:11:54.447 "num_base_bdevs_discovered": 4,
00:11:54.447 "num_base_bdevs_operational": 4,
00:11:54.447 "base_bdevs_list": [
00:11:54.447 {
00:11:54.447 "name": "pt1",
00:11:54.447 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:54.447 "is_configured": true,
00:11:54.447 "data_offset": 2048,
00:11:54.447 "data_size": 63488
00:11:54.447 },
00:11:54.447 {
00:11:54.447 "name": "pt2",
00:11:54.447 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:54.447 "is_configured": true,
00:11:54.447 "data_offset": 2048,
00:11:54.447 "data_size": 63488
00:11:54.447 },
00:11:54.447 {
00:11:54.447 "name": "pt3",
00:11:54.447 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:54.447 "is_configured": true,
00:11:54.447 "data_offset": 2048,
00:11:54.447 "data_size": 63488
00:11:54.447 },
00:11:54.447 {
00:11:54.447 "name": "pt4",
00:11:54.447 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:54.447 "is_configured": true,
00:11:54.447 "data_offset": 2048,
00:11:54.447 "data_size": 63488
00:11:54.447 }
00:11:54.447 ]
00:11:54.447 }
00:11:54.447 }
00:11:54.447 }'
00:11:54.448 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:54.707 pt2
00:11:54.707 pt3
00:11:54.707 pt4'
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:54.707 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:11:54.708 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:54.708 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.708 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.708 [2024-09-28 08:49:32.700226] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 67becc16-85b5-4c2d-b3f2-1f5e217cfb24 '!=' 67becc16-85b5-4c2d-b3f2-1f5e217cfb24 ']'
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.967 [2024-09-28 08:49:32.735917] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:54.967 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:54.967 "name": "raid_bdev1",
00:11:54.967 "uuid": "67becc16-85b5-4c2d-b3f2-1f5e217cfb24",
00:11:54.967 "strip_size_kb": 0,
00:11:54.967 "state": "online",
00:11:54.967 "raid_level": "raid1",
00:11:54.967 "superblock": true,
00:11:54.967 "num_base_bdevs": 4,
00:11:54.967 "num_base_bdevs_discovered": 3,
00:11:54.967 "num_base_bdevs_operational": 3,
00:11:54.967 "base_bdevs_list": [
00:11:54.967 {
00:11:54.967 "name": null,
00:11:54.967 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:54.967 "is_configured": false,
00:11:54.967 "data_offset": 0,
00:11:54.967 "data_size": 63488
00:11:54.967 },
00:11:54.967 {
00:11:54.967 "name": "pt2",
00:11:54.967 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:54.967 "is_configured": true,
00:11:54.967 "data_offset": 2048,
00:11:54.968 "data_size": 63488
00:11:54.968 },
00:11:54.968 {
00:11:54.968 "name": "pt3",
00:11:54.968 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:54.968 "is_configured": true,
00:11:54.968 "data_offset": 2048,
00:11:54.968 "data_size": 63488
00:11:54.968 },
00:11:54.968 {
00:11:54.968 "name": "pt4",
00:11:54.968 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:54.968 "is_configured": true,
00:11:54.968 "data_offset": 2048,
00:11:54.968 "data_size": 63488
00:11:54.968 }
00:11:54.968 ]
00:11:54.968 }'
00:11:54.968 08:49:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:54.968 08:49:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.228 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:55.228 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.228 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.228 [2024-09-28 08:49:33.183129] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:55.228 [2024-09-28 08:49:33.183219] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:55.228 [2024-09-28 08:49:33.183317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:55.228 [2024-09-28 08:49:33.183432] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:55.228 [2024-09-28 08:49:33.183481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:11:55.228 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.228 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:55.228 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.228 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.228 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:11:55.228 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.488 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.488 [2024-09-28 08:49:33.262972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:55.488 [2024-09-28 08:49:33.263068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:55.488 [2024-09-28 08:49:33.263104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:11:55.488 [2024-09-28 08:49:33.263136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:55.488 [2024-09-28 08:49:33.265578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:55.488 [2024-09-28 08:49:33.265660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:55.489 [2024-09-28 08:49:33.265798] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:55.489 [2024-09-28 08:49:33.265868] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:55.489 pt2
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:55.489 "name": "raid_bdev1",
00:11:55.489 "uuid": "67becc16-85b5-4c2d-b3f2-1f5e217cfb24",
00:11:55.489 "strip_size_kb": 0,
00:11:55.489 "state": "configuring",
00:11:55.489 "raid_level": "raid1",
00:11:55.489 "superblock": true,
00:11:55.489 "num_base_bdevs": 4,
00:11:55.489 "num_base_bdevs_discovered": 1,
00:11:55.489 "num_base_bdevs_operational": 3,
00:11:55.489 "base_bdevs_list": [
00:11:55.489 {
00:11:55.489 "name": null,
00:11:55.489 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:55.489 "is_configured": false,
00:11:55.489 "data_offset": 2048,
00:11:55.489 "data_size": 63488
00:11:55.489 },
00:11:55.489 {
00:11:55.489 "name": "pt2",
00:11:55.489 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:55.489 "is_configured": true,
00:11:55.489 "data_offset": 2048,
00:11:55.489 "data_size": 63488
00:11:55.489 },
00:11:55.489 {
00:11:55.489 "name": null,
00:11:55.489 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:55.489 "is_configured": false,
00:11:55.489 "data_offset": 2048,
00:11:55.489 "data_size": 63488
00:11:55.489 },
00:11:55.489 {
00:11:55.489 "name": null,
00:11:55.489 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:55.489 "is_configured": false,
00:11:55.489 "data_offset": 2048,
00:11:55.489 "data_size": 63488
00:11:55.489 }
00:11:55.489 ]
00:11:55.489 }'
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:55.489 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:55.749 [2024-09-28 08:49:33.722231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:55.749 [2024-09-28 08:49:33.722346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:55.749 [2024-09-28 08:49:33.722388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:11:55.749 [2024-09-28 08:49:33.722415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:55.749 [2024-09-28 08:49:33.722960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:55.749 [2024-09-28 08:49:33.723016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:55.749 [2024-09-28 08:49:33.723145] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:11:55.749 [2024-09-28 08:49:33.723207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:55.749 pt3
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.749 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.008 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:56.008 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:56.008 "name": "raid_bdev1",
00:11:56.008 "uuid": "67becc16-85b5-4c2d-b3f2-1f5e217cfb24",
00:11:56.008 "strip_size_kb": 0,
00:11:56.008 "state": "configuring",
00:11:56.008 "raid_level": "raid1",
00:11:56.008 "superblock": true,
00:11:56.008 "num_base_bdevs": 4,
00:11:56.008 "num_base_bdevs_discovered": 2,
00:11:56.008 "num_base_bdevs_operational": 3,
00:11:56.008 "base_bdevs_list": [
00:11:56.008 {
00:11:56.008 "name": null,
00:11:56.008 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:56.008 "is_configured": false,
00:11:56.008 "data_offset": 2048,
00:11:56.008 "data_size": 63488
00:11:56.008 },
00:11:56.008 {
00:11:56.008 "name": "pt2",
00:11:56.008 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:56.008 "is_configured": true,
00:11:56.008 "data_offset": 2048,
00:11:56.008 "data_size": 63488
00:11:56.008 },
00:11:56.008 {
00:11:56.008 "name": "pt3",
00:11:56.008 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:56.008 "is_configured": true,
00:11:56.008 "data_offset": 2048,
00:11:56.008 "data_size": 63488
00:11:56.008 },
00:11:56.008 {
00:11:56.008 "name": null,
00:11:56.008 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:56.008 "is_configured": false,
00:11:56.008 "data_offset": 2048,
00:11:56.008 "data_size": 63488
00:11:56.008 }
00:11:56.008 ]
00:11:56.008 }'
00:11:56.008 08:49:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:56.008 08:49:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.268 [2024-09-28 08:49:34.133513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:11:56.268 [2024-09-28 08:49:34.133618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:56.268 [2024-09-28 08:49:34.133672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:11:56.268 [2024-09-28 08:49:34.133720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:56.268 [2024-09-28 08:49:34.134264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:56.268 [2024-09-28 08:49:34.134325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:11:56.268 [2024-09-28 08:49:34.134441] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:11:56.268 [2024-09-28 08:49:34.134507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:11:56.268 [2024-09-28 08:49:34.134763] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:11:56.268 [2024-09-28 08:49:34.134809] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:11:56.268 [2024-09-28 08:49:34.135122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:11:56.268 [2024-09-28 08:49:34.135380] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:11:56.268 [2024-09-28 08:49:34.135435] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:11:56.268 [2024-09-28 08:49:34.135679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:56.268 pt4
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:56.268 "name": "raid_bdev1",
00:11:56.268 "uuid": "67becc16-85b5-4c2d-b3f2-1f5e217cfb24",
00:11:56.268 "strip_size_kb": 0,
00:11:56.268 "state": "online",
00:11:56.268 "raid_level": "raid1",
00:11:56.268 "superblock": true,
00:11:56.268 "num_base_bdevs": 4,
00:11:56.268 "num_base_bdevs_discovered": 3,
00:11:56.268 "num_base_bdevs_operational": 3,
00:11:56.268 "base_bdevs_list": [
00:11:56.268 {
00:11:56.268 "name": null,
00:11:56.268 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:56.268 "is_configured": false,
00:11:56.268 "data_offset": 2048,
00:11:56.268 "data_size": 63488
00:11:56.268 },
00:11:56.268 {
00:11:56.268 "name": "pt2",
00:11:56.268 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:56.268 "is_configured": true,
00:11:56.268 "data_offset": 2048,
00:11:56.268 "data_size": 63488
00:11:56.268 },
00:11:56.268 {
00:11:56.268 "name": "pt3",
00:11:56.268 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:56.268 "is_configured": true,
00:11:56.268 "data_offset": 2048,
00:11:56.268 "data_size": 63488
00:11:56.268 },
00:11:56.268 {
00:11:56.268 "name": "pt4",
00:11:56.268 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:56.268 "is_configured": true,
00:11:56.268 "data_offset": 2048,
00:11:56.268 "data_size": 63488
00:11:56.268 }
00:11:56.268 ]
00:11:56.268 }'
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:56.268 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.837 [2024-09-28 08:49:34.548763] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:56.837 [2024-09-28 08:49:34.548826] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:56.837 [2024-09-28 08:49:34.548934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:56.837 [2024-09-28 08:49:34.549022] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:56.837 [2024-09-28 08:49:34.549072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']'
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.837 [2024-09-28 08:49:34.620657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:56.837 [2024-09-28 08:49:34.620766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:56.837 [2024-09-28 08:49:34.620801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:11:56.837 [2024-09-28 08:49:34.620831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:56.837 [2024-09-28 08:49:34.623337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:56.837 [2024-09-28 08:49:34.623411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:56.837 [2024-09-28 08:49:34.623492] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:11:56.837 [2024-09-28 08:49:34.623556] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:56.837 [2024-09-28 08:49:34.623705] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:11:56.837 [2024-09-28 08:49:34.623722] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:56.837 [2024-09-28 08:49:34.623740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:11:56.837 [2024-09-28 08:49:34.623812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:56.837 [2024-09-28 08:49:34.623921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:56.837 pt1
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']'
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:56.837 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:56.838 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:56.838 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:56.838 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:56.838 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:56.838 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:56.838 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:56.838 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.838 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:56.838 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:56.838 "name": "raid_bdev1",
00:11:56.838 "uuid": "67becc16-85b5-4c2d-b3f2-1f5e217cfb24",
00:11:56.838 "strip_size_kb": 0,
00:11:56.838 "state": "configuring",
00:11:56.838 "raid_level": "raid1",
00:11:56.838 "superblock": true,
00:11:56.838 "num_base_bdevs": 4,
00:11:56.838 "num_base_bdevs_discovered": 2,
00:11:56.838 "num_base_bdevs_operational": 3,
00:11:56.838 "base_bdevs_list": [
00:11:56.838 {
00:11:56.838 "name": null,
00:11:56.838 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:56.838 "is_configured": false,
00:11:56.838 "data_offset": 2048,
"data_size": 63488 00:11:56.838 }, 00:11:56.838 { 00:11:56.838 "name": "pt2", 00:11:56.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.838 "is_configured": true, 00:11:56.838 "data_offset": 2048, 00:11:56.838 "data_size": 63488 00:11:56.838 }, 00:11:56.838 { 00:11:56.838 "name": "pt3", 00:11:56.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:56.838 "is_configured": true, 00:11:56.838 "data_offset": 2048, 00:11:56.838 "data_size": 63488 00:11:56.838 }, 00:11:56.838 { 00:11:56.838 "name": null, 00:11:56.838 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:56.838 "is_configured": false, 00:11:56.838 "data_offset": 2048, 00:11:56.838 "data_size": 63488 00:11:56.838 } 00:11:56.838 ] 00:11:56.838 }' 00:11:56.838 08:49:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.838 08:49:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.097 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:57.097 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:57.097 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.097 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.097 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.355 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:57.355 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:57.355 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.355 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.355 [2024-09-28 
08:49:35.103859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:57.355 [2024-09-28 08:49:35.103959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.355 [2024-09-28 08:49:35.103999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:57.355 [2024-09-28 08:49:35.104028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.355 [2024-09-28 08:49:35.104522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.355 [2024-09-28 08:49:35.104576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:57.355 [2024-09-28 08:49:35.104699] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:57.355 [2024-09-28 08:49:35.104752] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:57.355 [2024-09-28 08:49:35.104925] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:57.355 [2024-09-28 08:49:35.104961] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:57.355 [2024-09-28 08:49:35.105245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:57.355 [2024-09-28 08:49:35.105423] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:57.355 [2024-09-28 08:49:35.105464] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:57.355 [2024-09-28 08:49:35.105669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.355 pt4 00:11:57.355 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.355 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:57.355 08:49:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.356 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.356 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.356 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.356 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.356 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.356 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.356 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.356 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.356 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.356 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.356 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.356 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.356 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.356 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.356 "name": "raid_bdev1", 00:11:57.356 "uuid": "67becc16-85b5-4c2d-b3f2-1f5e217cfb24", 00:11:57.356 "strip_size_kb": 0, 00:11:57.356 "state": "online", 00:11:57.356 "raid_level": "raid1", 00:11:57.356 "superblock": true, 00:11:57.356 "num_base_bdevs": 4, 00:11:57.356 "num_base_bdevs_discovered": 3, 00:11:57.356 "num_base_bdevs_operational": 3, 00:11:57.356 "base_bdevs_list": [ 00:11:57.356 { 
00:11:57.356 "name": null, 00:11:57.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.356 "is_configured": false, 00:11:57.356 "data_offset": 2048, 00:11:57.356 "data_size": 63488 00:11:57.356 }, 00:11:57.356 { 00:11:57.356 "name": "pt2", 00:11:57.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.356 "is_configured": true, 00:11:57.356 "data_offset": 2048, 00:11:57.356 "data_size": 63488 00:11:57.356 }, 00:11:57.356 { 00:11:57.356 "name": "pt3", 00:11:57.356 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.356 "is_configured": true, 00:11:57.356 "data_offset": 2048, 00:11:57.356 "data_size": 63488 00:11:57.356 }, 00:11:57.356 { 00:11:57.356 "name": "pt4", 00:11:57.356 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:57.356 "is_configured": true, 00:11:57.356 "data_offset": 2048, 00:11:57.356 "data_size": 63488 00:11:57.356 } 00:11:57.356 ] 00:11:57.356 }' 00:11:57.356 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.356 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.614 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:57.614 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.614 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.614 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:57.614 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.874 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:57.874 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:57.874 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:57.874 
08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.874 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.874 [2024-09-28 08:49:35.639187] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.874 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.874 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 67becc16-85b5-4c2d-b3f2-1f5e217cfb24 '!=' 67becc16-85b5-4c2d-b3f2-1f5e217cfb24 ']' 00:11:57.874 08:49:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74535 00:11:57.874 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74535 ']' 00:11:57.874 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74535 00:11:57.874 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:57.874 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:57.874 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74535 00:11:57.874 killing process with pid 74535 00:11:57.874 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:57.874 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:57.874 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74535' 00:11:57.874 08:49:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74535 00:11:57.874 [2024-09-28 08:49:35.718348] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:57.874 [2024-09-28 08:49:35.718442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.874 08:49:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74535 00:11:57.874 [2024-09-28 08:49:35.718522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.874 [2024-09-28 08:49:35.718535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:58.443 [2024-09-28 08:49:36.134857] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:59.822 08:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:59.822 00:11:59.822 real 0m8.734s 00:11:59.822 user 0m13.447s 00:11:59.822 sys 0m1.691s 00:11:59.822 08:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.822 08:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.822 ************************************ 00:11:59.822 END TEST raid_superblock_test 00:11:59.822 ************************************ 00:11:59.822 08:49:37 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:59.822 08:49:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:59.823 08:49:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.823 08:49:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:59.823 ************************************ 00:11:59.823 START TEST raid_read_error_test 00:11:59.823 ************************************ 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:59.823 
08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:59.823 08:49:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZBMCoWk0lo 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75028 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75028 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75028 ']' 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:59.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:59.823 08:49:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.823 [2024-09-28 08:49:37.631202] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:59.823 [2024-09-28 08:49:37.631449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75028 ] 00:11:59.823 [2024-09-28 08:49:37.800394] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.082 [2024-09-28 08:49:38.041897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.342 [2024-09-28 08:49:38.260977] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.342 [2024-09-28 08:49:38.261020] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.602 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:00.602 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:00.602 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.602 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:00.602 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.602 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.602 BaseBdev1_malloc 00:12:00.602 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.602 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:00.602 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.602 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.602 true 00:12:00.602 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:00.602 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:00.602 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.602 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.602 [2024-09-28 08:49:38.515203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:00.602 [2024-09-28 08:49:38.515302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.602 [2024-09-28 08:49:38.515324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:00.603 [2024-09-28 08:49:38.515336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.603 [2024-09-28 08:49:38.517790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.603 [2024-09-28 08:49:38.517827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:00.603 BaseBdev1 00:12:00.603 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.603 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.603 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:00.603 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.603 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.863 BaseBdev2_malloc 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.863 true 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.863 [2024-09-28 08:49:38.616626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:00.863 [2024-09-28 08:49:38.616707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.863 [2024-09-28 08:49:38.616725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:00.863 [2024-09-28 08:49:38.616736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.863 [2024-09-28 08:49:38.619034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.863 [2024-09-28 08:49:38.619071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:00.863 BaseBdev2 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.863 BaseBdev3_malloc 00:12:00.863 08:49:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.863 true 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.863 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.863 [2024-09-28 08:49:38.687876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:00.864 [2024-09-28 08:49:38.687926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.864 [2024-09-28 08:49:38.687959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:00.864 [2024-09-28 08:49:38.687971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.864 [2024-09-28 08:49:38.690348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.864 [2024-09-28 08:49:38.690434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:00.864 BaseBdev3 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.864 BaseBdev4_malloc 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.864 true 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.864 [2024-09-28 08:49:38.758148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:00.864 [2024-09-28 08:49:38.758196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.864 [2024-09-28 08:49:38.758230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:00.864 [2024-09-28 08:49:38.758243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.864 [2024-09-28 08:49:38.760627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.864 [2024-09-28 08:49:38.760731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:00.864 BaseBdev4 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.864 [2024-09-28 08:49:38.770231] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.864 [2024-09-28 08:49:38.772323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.864 [2024-09-28 08:49:38.772399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.864 [2024-09-28 08:49:38.772456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:00.864 [2024-09-28 08:49:38.772713] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:00.864 [2024-09-28 08:49:38.772728] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:00.864 [2024-09-28 08:49:38.772953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:00.864 [2024-09-28 08:49:38.773133] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:00.864 [2024-09-28 08:49:38.773148] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:00.864 [2024-09-28 08:49:38.773299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:00.864 08:49:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.864 "name": "raid_bdev1", 00:12:00.864 "uuid": "377d3bf3-11df-4984-a152-d1aa425ba4df", 00:12:00.864 "strip_size_kb": 0, 00:12:00.864 "state": "online", 00:12:00.864 "raid_level": "raid1", 00:12:00.864 "superblock": true, 00:12:00.864 "num_base_bdevs": 4, 00:12:00.864 "num_base_bdevs_discovered": 4, 00:12:00.864 "num_base_bdevs_operational": 4, 00:12:00.864 "base_bdevs_list": [ 00:12:00.864 { 
00:12:00.864 "name": "BaseBdev1", 00:12:00.864 "uuid": "ec17fe39-a248-5b42-8b45-07d6fa1f821d", 00:12:00.864 "is_configured": true, 00:12:00.864 "data_offset": 2048, 00:12:00.864 "data_size": 63488 00:12:00.864 }, 00:12:00.864 { 00:12:00.864 "name": "BaseBdev2", 00:12:00.864 "uuid": "06ffd272-d2b9-5654-9011-dd40eafae9f6", 00:12:00.864 "is_configured": true, 00:12:00.864 "data_offset": 2048, 00:12:00.864 "data_size": 63488 00:12:00.864 }, 00:12:00.864 { 00:12:00.864 "name": "BaseBdev3", 00:12:00.864 "uuid": "f9c6274f-0fae-55fb-9991-0d2a6bc5adad", 00:12:00.864 "is_configured": true, 00:12:00.864 "data_offset": 2048, 00:12:00.864 "data_size": 63488 00:12:00.864 }, 00:12:00.864 { 00:12:00.864 "name": "BaseBdev4", 00:12:00.864 "uuid": "42e60ba5-bfbc-5f17-a9da-30ccac143439", 00:12:00.864 "is_configured": true, 00:12:00.864 "data_offset": 2048, 00:12:00.864 "data_size": 63488 00:12:00.864 } 00:12:00.864 ] 00:12:00.864 }' 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.864 08:49:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.433 08:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:01.433 08:49:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:01.433 [2024-09-28 08:49:39.266821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.372 08:49:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.372 08:49:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.372 "name": "raid_bdev1", 00:12:02.372 "uuid": "377d3bf3-11df-4984-a152-d1aa425ba4df", 00:12:02.372 "strip_size_kb": 0, 00:12:02.372 "state": "online", 00:12:02.372 "raid_level": "raid1", 00:12:02.372 "superblock": true, 00:12:02.372 "num_base_bdevs": 4, 00:12:02.372 "num_base_bdevs_discovered": 4, 00:12:02.372 "num_base_bdevs_operational": 4, 00:12:02.372 "base_bdevs_list": [ 00:12:02.372 { 00:12:02.372 "name": "BaseBdev1", 00:12:02.372 "uuid": "ec17fe39-a248-5b42-8b45-07d6fa1f821d", 00:12:02.372 "is_configured": true, 00:12:02.372 "data_offset": 2048, 00:12:02.372 "data_size": 63488 00:12:02.372 }, 00:12:02.372 { 00:12:02.372 "name": "BaseBdev2", 00:12:02.372 "uuid": "06ffd272-d2b9-5654-9011-dd40eafae9f6", 00:12:02.372 "is_configured": true, 00:12:02.372 "data_offset": 2048, 00:12:02.372 "data_size": 63488 00:12:02.372 }, 00:12:02.372 { 00:12:02.372 "name": "BaseBdev3", 00:12:02.372 "uuid": "f9c6274f-0fae-55fb-9991-0d2a6bc5adad", 00:12:02.372 "is_configured": true, 00:12:02.372 "data_offset": 2048, 00:12:02.372 "data_size": 63488 00:12:02.372 }, 00:12:02.372 { 00:12:02.372 "name": "BaseBdev4", 00:12:02.372 "uuid": "42e60ba5-bfbc-5f17-a9da-30ccac143439", 00:12:02.372 "is_configured": true, 00:12:02.372 "data_offset": 2048, 00:12:02.372 "data_size": 63488 00:12:02.372 } 00:12:02.372 ] 00:12:02.372 }' 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.372 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.941 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:02.941 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.941 08:49:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:02.941 [2024-09-28 08:49:40.635265] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:02.941 [2024-09-28 08:49:40.635367] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.941 [2024-09-28 08:49:40.638081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.941 [2024-09-28 08:49:40.638187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.941 [2024-09-28 08:49:40.638350] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.941 [2024-09-28 08:49:40.638404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:02.941 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.941 { 00:12:02.941 "results": [ 00:12:02.941 { 00:12:02.941 "job": "raid_bdev1", 00:12:02.941 "core_mask": "0x1", 00:12:02.941 "workload": "randrw", 00:12:02.941 "percentage": 50, 00:12:02.941 "status": "finished", 00:12:02.941 "queue_depth": 1, 00:12:02.941 "io_size": 131072, 00:12:02.941 "runtime": 1.369012, 00:12:02.941 "iops": 8233.675088311862, 00:12:02.941 "mibps": 1029.2093860389828, 00:12:02.941 "io_failed": 0, 00:12:02.941 "io_timeout": 0, 00:12:02.941 "avg_latency_us": 119.00189982675315, 00:12:02.941 "min_latency_us": 21.910917030567685, 00:12:02.941 "max_latency_us": 1430.9170305676855 00:12:02.941 } 00:12:02.941 ], 00:12:02.941 "core_count": 1 00:12:02.941 } 00:12:02.941 08:49:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75028 00:12:02.941 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75028 ']' 00:12:02.941 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75028 00:12:02.941 08:49:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:12:02.941 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.941 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75028 00:12:02.941 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:02.941 killing process with pid 75028 00:12:02.941 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:02.941 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75028' 00:12:02.941 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75028 00:12:02.941 [2024-09-28 08:49:40.685556] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.941 08:49:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75028 00:12:03.201 [2024-09-28 08:49:41.024948] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:04.583 08:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZBMCoWk0lo 00:12:04.583 08:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:04.583 08:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:04.583 08:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:04.583 08:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:04.583 08:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:04.583 08:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:04.583 08:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:04.583 ************************************ 00:12:04.583 END TEST raid_read_error_test 
00:12:04.583 ************************************ 00:12:04.583 00:12:04.583 real 0m4.905s 00:12:04.583 user 0m5.552s 00:12:04.583 sys 0m0.729s 00:12:04.583 08:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:04.583 08:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.583 08:49:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:04.583 08:49:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:04.583 08:49:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:04.583 08:49:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.583 ************************************ 00:12:04.583 START TEST raid_write_error_test 00:12:04.583 ************************************ 00:12:04.583 08:49:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:12:04.583 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:04.583 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:04.583 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:04.583 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:04.583 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.583 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:04.583 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.583 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.583 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:04.583 08:49:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.583 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.583 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:04.583 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.583 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.583 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VbexP0Mz2l 00:12:04.584 08:49:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75173 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75173 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75173 ']' 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:04.584 08:49:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.843 [2024-09-28 08:49:42.605815] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:04.843 [2024-09-28 08:49:42.606012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75173 ] 00:12:04.843 [2024-09-28 08:49:42.775782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.103 [2024-09-28 08:49:43.016143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.362 [2024-09-28 08:49:43.244970] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.362 [2024-09-28 08:49:43.245122] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.623 BaseBdev1_malloc 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.623 true 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.623 [2024-09-28 08:49:43.498984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:05.623 [2024-09-28 08:49:43.499046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.623 [2024-09-28 08:49:43.499064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:05.623 [2024-09-28 08:49:43.499075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.623 [2024-09-28 08:49:43.501566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.623 [2024-09-28 08:49:43.501660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:05.623 BaseBdev1 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.623 BaseBdev2_malloc 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:05.623 08:49:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.623 true 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.623 [2024-09-28 08:49:43.576464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:05.623 [2024-09-28 08:49:43.576587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.623 [2024-09-28 08:49:43.576607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:05.623 [2024-09-28 08:49:43.576618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.623 [2024-09-28 08:49:43.578943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.623 [2024-09-28 08:49:43.578979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:05.623 BaseBdev2 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.623 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:05.889 BaseBdev3_malloc 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.889 true 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.889 [2024-09-28 08:49:43.649577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:05.889 [2024-09-28 08:49:43.649682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.889 [2024-09-28 08:49:43.649706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:05.889 [2024-09-28 08:49:43.649719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.889 [2024-09-28 08:49:43.652361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.889 [2024-09-28 08:49:43.652423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:05.889 BaseBdev3 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.889 BaseBdev4_malloc 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.889 true 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.889 [2024-09-28 08:49:43.721409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:05.889 [2024-09-28 08:49:43.721461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.889 [2024-09-28 08:49:43.721494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:05.889 [2024-09-28 08:49:43.721507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.889 [2024-09-28 08:49:43.723844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.889 [2024-09-28 08:49:43.723883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:05.889 BaseBdev4 
00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.889 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.889 [2024-09-28 08:49:43.733467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.889 [2024-09-28 08:49:43.735571] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.889 [2024-09-28 08:49:43.735716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.889 [2024-09-28 08:49:43.735802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:05.889 [2024-09-28 08:49:43.736077] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:05.889 [2024-09-28 08:49:43.736126] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:05.890 [2024-09-28 08:49:43.736389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:05.890 [2024-09-28 08:49:43.736590] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:05.890 [2024-09-28 08:49:43.736603] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:05.890 [2024-09-28 08:49:43.736775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.890 "name": "raid_bdev1", 00:12:05.890 "uuid": "f2e6558a-cce8-4bdc-bf8a-43dc93fcd18b", 00:12:05.890 "strip_size_kb": 0, 00:12:05.890 "state": "online", 00:12:05.890 "raid_level": "raid1", 00:12:05.890 "superblock": true, 00:12:05.890 "num_base_bdevs": 4, 00:12:05.890 "num_base_bdevs_discovered": 4, 00:12:05.890 
"num_base_bdevs_operational": 4, 00:12:05.890 "base_bdevs_list": [ 00:12:05.890 { 00:12:05.890 "name": "BaseBdev1", 00:12:05.890 "uuid": "bc0c9a77-a5d7-5e50-a039-b2f5d91108c1", 00:12:05.890 "is_configured": true, 00:12:05.890 "data_offset": 2048, 00:12:05.890 "data_size": 63488 00:12:05.890 }, 00:12:05.890 { 00:12:05.890 "name": "BaseBdev2", 00:12:05.890 "uuid": "5e14326b-c9cd-5276-a4ef-1b7b525b15ed", 00:12:05.890 "is_configured": true, 00:12:05.890 "data_offset": 2048, 00:12:05.890 "data_size": 63488 00:12:05.890 }, 00:12:05.890 { 00:12:05.890 "name": "BaseBdev3", 00:12:05.890 "uuid": "29b66959-33c0-5888-b443-2a57a282fa85", 00:12:05.890 "is_configured": true, 00:12:05.890 "data_offset": 2048, 00:12:05.890 "data_size": 63488 00:12:05.890 }, 00:12:05.890 { 00:12:05.890 "name": "BaseBdev4", 00:12:05.890 "uuid": "a7e5d0f6-457c-5c1c-b7d7-f7724fd8dc08", 00:12:05.890 "is_configured": true, 00:12:05.890 "data_offset": 2048, 00:12:05.890 "data_size": 63488 00:12:05.890 } 00:12:05.890 ] 00:12:05.890 }' 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.890 08:49:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.469 08:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:06.469 08:49:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:06.469 [2024-09-28 08:49:44.273876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.410 [2024-09-28 08:49:45.188995] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:07.410 [2024-09-28 08:49:45.189143] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:07.410 [2024-09-28 08:49:45.189394] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.410 "name": "raid_bdev1", 00:12:07.410 "uuid": "f2e6558a-cce8-4bdc-bf8a-43dc93fcd18b", 00:12:07.410 "strip_size_kb": 0, 00:12:07.410 "state": "online", 00:12:07.410 "raid_level": "raid1", 00:12:07.410 "superblock": true, 00:12:07.410 "num_base_bdevs": 4, 00:12:07.410 "num_base_bdevs_discovered": 3, 00:12:07.410 "num_base_bdevs_operational": 3, 00:12:07.410 "base_bdevs_list": [ 00:12:07.410 { 00:12:07.410 "name": null, 00:12:07.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.410 "is_configured": false, 00:12:07.410 "data_offset": 0, 00:12:07.410 "data_size": 63488 00:12:07.410 }, 00:12:07.410 { 00:12:07.410 "name": "BaseBdev2", 00:12:07.410 "uuid": "5e14326b-c9cd-5276-a4ef-1b7b525b15ed", 00:12:07.410 "is_configured": true, 00:12:07.410 "data_offset": 2048, 00:12:07.410 "data_size": 63488 00:12:07.410 }, 00:12:07.410 { 00:12:07.410 "name": "BaseBdev3", 00:12:07.410 "uuid": "29b66959-33c0-5888-b443-2a57a282fa85", 00:12:07.410 "is_configured": true, 00:12:07.410 "data_offset": 2048, 00:12:07.410 "data_size": 63488 00:12:07.410 }, 00:12:07.410 { 00:12:07.410 "name": "BaseBdev4", 00:12:07.410 "uuid": "a7e5d0f6-457c-5c1c-b7d7-f7724fd8dc08", 00:12:07.410 "is_configured": true, 00:12:07.410 "data_offset": 2048, 00:12:07.410 "data_size": 63488 00:12:07.410 } 00:12:07.410 ] 
00:12:07.410 }' 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.410 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.669 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:07.669 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.669 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.669 [2024-09-28 08:49:45.625588] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.669 [2024-09-28 08:49:45.625668] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.669 [2024-09-28 08:49:45.628185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.669 [2024-09-28 08:49:45.628239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.669 [2024-09-28 08:49:45.628348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.669 [2024-09-28 08:49:45.628358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:07.669 { 00:12:07.669 "results": [ 00:12:07.669 { 00:12:07.670 "job": "raid_bdev1", 00:12:07.670 "core_mask": "0x1", 00:12:07.670 "workload": "randrw", 00:12:07.670 "percentage": 50, 00:12:07.670 "status": "finished", 00:12:07.670 "queue_depth": 1, 00:12:07.670 "io_size": 131072, 00:12:07.670 "runtime": 1.352168, 00:12:07.670 "iops": 9023.287047171652, 00:12:07.670 "mibps": 1127.9108808964565, 00:12:07.670 "io_failed": 0, 00:12:07.670 "io_timeout": 0, 00:12:07.670 "avg_latency_us": 108.35492516362571, 00:12:07.670 "min_latency_us": 22.581659388646287, 00:12:07.670 "max_latency_us": 1552.5449781659388 00:12:07.670 } 00:12:07.670 ], 00:12:07.670 "core_count": 1 
00:12:07.670 } 00:12:07.670 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.670 08:49:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75173 00:12:07.670 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75173 ']' 00:12:07.670 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75173 00:12:07.670 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:07.670 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:07.670 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75173 00:12:07.930 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:07.930 killing process with pid 75173 00:12:07.930 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:07.930 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75173' 00:12:07.930 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75173 00:12:07.930 [2024-09-28 08:49:45.682161] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:07.930 08:49:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75173 00:12:08.190 [2024-09-28 08:49:46.028798] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:09.572 08:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VbexP0Mz2l 00:12:09.572 08:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:09.572 08:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:09.572 08:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:09.572 08:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:09.572 08:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:09.572 08:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:09.572 08:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:09.572 ************************************ 00:12:09.572 END TEST raid_write_error_test 00:12:09.572 ************************************ 00:12:09.572 00:12:09.572 real 0m4.927s 00:12:09.572 user 0m5.616s 00:12:09.572 sys 0m0.741s 00:12:09.572 08:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.572 08:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.572 08:49:47 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:09.572 08:49:47 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:09.572 08:49:47 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:09.572 08:49:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:09.572 08:49:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.572 08:49:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.572 ************************************ 00:12:09.572 START TEST raid_rebuild_test 00:12:09.572 ************************************ 00:12:09.572 08:49:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:12:09.572 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:09.572 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:09.572 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:09.572 
08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:09.572 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:09.572 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:09.572 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.572 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:09.572 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75317 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75317 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 75317 ']' 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:09.573 08:49:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.833 [2024-09-28 08:49:47.613699] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:09.833 [2024-09-28 08:49:47.613915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:09.833 Zero copy mechanism will not be used. 
00:12:09.833 -allocations --file-prefix=spdk_pid75317 ] 00:12:09.833 [2024-09-28 08:49:47.779301] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.092 [2024-09-28 08:49:48.031588] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.352 [2024-09-28 08:49:48.263179] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.352 [2024-09-28 08:49:48.263217] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.612 BaseBdev1_malloc 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.612 [2024-09-28 08:49:48.469119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:10.612 [2024-09-28 08:49:48.469244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.612 [2024-09-28 08:49:48.469293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:10.612 
[2024-09-28 08:49:48.469348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.612 [2024-09-28 08:49:48.471835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.612 [2024-09-28 08:49:48.471903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:10.612 BaseBdev1 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.612 BaseBdev2_malloc 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.612 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.612 [2024-09-28 08:49:48.556956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:10.612 [2024-09-28 08:49:48.557069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.612 [2024-09-28 08:49:48.557095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:10.612 [2024-09-28 08:49:48.557109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.612 [2024-09-28 08:49:48.559497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:12:10.613 [2024-09-28 08:49:48.559536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:10.613 BaseBdev2 00:12:10.613 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.613 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:10.613 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.613 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.872 spare_malloc 00:12:10.872 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.872 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:10.872 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.872 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.872 spare_delay 00:12:10.872 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.872 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:10.872 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.872 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.872 [2024-09-28 08:49:48.628826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:10.872 [2024-09-28 08:49:48.628931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.872 [2024-09-28 08:49:48.628954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:10.872 [2024-09-28 08:49:48.628965] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:12:10.872 [2024-09-28 08:49:48.631318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.872 [2024-09-28 08:49:48.631353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:10.872 spare 00:12:10.872 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.872 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:10.872 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.873 [2024-09-28 08:49:48.640849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.873 [2024-09-28 08:49:48.642843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.873 [2024-09-28 08:49:48.642977] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:10.873 [2024-09-28 08:49:48.642994] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:10.873 [2024-09-28 08:49:48.643265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:10.873 [2024-09-28 08:49:48.643416] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:10.873 [2024-09-28 08:49:48.643425] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:10.873 [2024-09-28 08:49:48.643581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 
0 2 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.873 "name": "raid_bdev1", 00:12:10.873 "uuid": "4eb8a5b7-3f87-4862-aacd-3be4a9d182fb", 00:12:10.873 "strip_size_kb": 0, 00:12:10.873 "state": "online", 00:12:10.873 "raid_level": "raid1", 00:12:10.873 "superblock": false, 00:12:10.873 "num_base_bdevs": 2, 00:12:10.873 "num_base_bdevs_discovered": 2, 00:12:10.873 "num_base_bdevs_operational": 2, 00:12:10.873 "base_bdevs_list": [ 00:12:10.873 { 00:12:10.873 
"name": "BaseBdev1", 00:12:10.873 "uuid": "844b1ba1-a994-52ef-8f3c-aa87b321d365", 00:12:10.873 "is_configured": true, 00:12:10.873 "data_offset": 0, 00:12:10.873 "data_size": 65536 00:12:10.873 }, 00:12:10.873 { 00:12:10.873 "name": "BaseBdev2", 00:12:10.873 "uuid": "2254edb8-6234-56a5-9fa7-67e3f7c63fd7", 00:12:10.873 "is_configured": true, 00:12:10.873 "data_offset": 0, 00:12:10.873 "data_size": 65536 00:12:10.873 } 00:12:10.873 ] 00:12:10.873 }' 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.873 08:49:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.131 08:49:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:11.131 08:49:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:11.131 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.131 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.131 [2024-09-28 08:49:49.088385] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:11.131 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # 
data_offset=0 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:11.391 [2024-09-28 08:49:49.343730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:11.391 /dev/nbd0 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:11.391 08:49:49 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:11.391 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:11.650 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:11.650 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:11.650 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:11.650 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.650 1+0 records in 00:12:11.650 1+0 records out 00:12:11.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439168 s, 9.3 MB/s 00:12:11.650 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.650 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:11.650 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.650 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:11.650 08:49:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:11.650 08:49:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:11.650 08:49:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:11.650 08:49:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:11.650 08:49:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:11.650 08:49:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 
bs=512 count=65536 oflag=direct 00:12:15.844 65536+0 records in 00:12:15.844 65536+0 records out 00:12:15.844 33554432 bytes (34 MB, 32 MiB) copied, 3.68439 s, 9.1 MB/s 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:15.844 [2024-09-28 08:49:53.301911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:15.844 
08:49:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.844 [2024-09-28 08:49:53.317977] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.844 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.845 "name": "raid_bdev1", 00:12:15.845 "uuid": "4eb8a5b7-3f87-4862-aacd-3be4a9d182fb", 00:12:15.845 "strip_size_kb": 0, 00:12:15.845 "state": "online", 00:12:15.845 "raid_level": "raid1", 00:12:15.845 "superblock": false, 00:12:15.845 "num_base_bdevs": 2, 00:12:15.845 "num_base_bdevs_discovered": 1, 00:12:15.845 "num_base_bdevs_operational": 1, 00:12:15.845 "base_bdevs_list": [ 00:12:15.845 { 00:12:15.845 "name": null, 00:12:15.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.845 "is_configured": false, 00:12:15.845 "data_offset": 0, 00:12:15.845 "data_size": 65536 00:12:15.845 }, 00:12:15.845 { 00:12:15.845 "name": "BaseBdev2", 00:12:15.845 "uuid": "2254edb8-6234-56a5-9fa7-67e3f7c63fd7", 00:12:15.845 "is_configured": true, 00:12:15.845 "data_offset": 0, 00:12:15.845 "data_size": 65536 00:12:15.845 } 00:12:15.845 ] 00:12:15.845 }' 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.845 [2024-09-28 08:49:53.749272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:15.845 [2024-09-28 08:49:53.766866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.845 08:49:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:15.845 [2024-09-28 08:49:53.769006] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:16.783 08:49:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.783 08:49:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.783 08:49:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:16.783 08:49:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.783 08:49:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.043 08:49:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.043 08:49:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.043 08:49:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.043 08:49:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.043 08:49:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.043 08:49:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.043 "name": "raid_bdev1", 00:12:17.043 "uuid": "4eb8a5b7-3f87-4862-aacd-3be4a9d182fb", 00:12:17.043 "strip_size_kb": 0, 00:12:17.043 "state": "online", 00:12:17.043 "raid_level": "raid1", 00:12:17.043 "superblock": false, 00:12:17.043 "num_base_bdevs": 2, 00:12:17.043 "num_base_bdevs_discovered": 2, 00:12:17.043 "num_base_bdevs_operational": 2, 00:12:17.043 "process": { 00:12:17.043 "type": "rebuild", 00:12:17.043 "target": "spare", 00:12:17.043 "progress": { 00:12:17.043 "blocks": 20480, 00:12:17.043 "percent": 31 00:12:17.043 } 00:12:17.043 }, 00:12:17.043 "base_bdevs_list": [ 00:12:17.043 { 00:12:17.043 "name": "spare", 00:12:17.043 "uuid": "a2efbecf-1efb-5126-acf8-660131c7d873", 00:12:17.043 
"is_configured": true, 00:12:17.043 "data_offset": 0, 00:12:17.043 "data_size": 65536 00:12:17.043 }, 00:12:17.043 { 00:12:17.043 "name": "BaseBdev2", 00:12:17.043 "uuid": "2254edb8-6234-56a5-9fa7-67e3f7c63fd7", 00:12:17.043 "is_configured": true, 00:12:17.043 "data_offset": 0, 00:12:17.043 "data_size": 65536 00:12:17.043 } 00:12:17.043 ] 00:12:17.043 }' 00:12:17.043 08:49:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.043 08:49:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:17.043 08:49:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.043 08:49:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:17.043 08:49:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:17.043 08:49:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.043 08:49:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.043 [2024-09-28 08:49:54.904069] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:17.043 [2024-09-28 08:49:54.977631] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:17.043 [2024-09-28 08:49:54.977747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.043 [2024-09-28 08:49:54.977786] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:17.043 [2024-09-28 08:49:54.977811] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:17.043 08:49:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.043 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:17.043 
08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.043 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.043 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.043 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.043 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:17.043 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.043 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.043 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.043 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.043 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.043 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.043 08:49:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.043 08:49:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.043 08:49:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.303 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.303 "name": "raid_bdev1", 00:12:17.303 "uuid": "4eb8a5b7-3f87-4862-aacd-3be4a9d182fb", 00:12:17.303 "strip_size_kb": 0, 00:12:17.303 "state": "online", 00:12:17.303 "raid_level": "raid1", 00:12:17.303 "superblock": false, 00:12:17.303 "num_base_bdevs": 2, 00:12:17.303 "num_base_bdevs_discovered": 1, 00:12:17.303 "num_base_bdevs_operational": 1, 00:12:17.303 "base_bdevs_list": [ 00:12:17.303 { 00:12:17.303 "name": null, 
00:12:17.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.303 "is_configured": false, 00:12:17.303 "data_offset": 0, 00:12:17.303 "data_size": 65536 00:12:17.303 }, 00:12:17.303 { 00:12:17.303 "name": "BaseBdev2", 00:12:17.303 "uuid": "2254edb8-6234-56a5-9fa7-67e3f7c63fd7", 00:12:17.303 "is_configured": true, 00:12:17.303 "data_offset": 0, 00:12:17.303 "data_size": 65536 00:12:17.303 } 00:12:17.303 ] 00:12:17.303 }' 00:12:17.303 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.303 08:49:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.563 "name": "raid_bdev1", 00:12:17.563 "uuid": "4eb8a5b7-3f87-4862-aacd-3be4a9d182fb", 00:12:17.563 "strip_size_kb": 0, 00:12:17.563 "state": "online", 00:12:17.563 "raid_level": "raid1", 
00:12:17.563 "superblock": false, 00:12:17.563 "num_base_bdevs": 2, 00:12:17.563 "num_base_bdevs_discovered": 1, 00:12:17.563 "num_base_bdevs_operational": 1, 00:12:17.563 "base_bdevs_list": [ 00:12:17.563 { 00:12:17.563 "name": null, 00:12:17.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.563 "is_configured": false, 00:12:17.563 "data_offset": 0, 00:12:17.563 "data_size": 65536 00:12:17.563 }, 00:12:17.563 { 00:12:17.563 "name": "BaseBdev2", 00:12:17.563 "uuid": "2254edb8-6234-56a5-9fa7-67e3f7c63fd7", 00:12:17.563 "is_configured": true, 00:12:17.563 "data_offset": 0, 00:12:17.563 "data_size": 65536 00:12:17.563 } 00:12:17.563 ] 00:12:17.563 }' 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.563 08:49:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.563 [2024-09-28 08:49:55.540552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:17.563 [2024-09-28 08:49:55.556453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:17.822 08:49:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.822 08:49:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:17.822 [2024-09-28 08:49:55.558589] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 
00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.797 "name": "raid_bdev1", 00:12:18.797 "uuid": "4eb8a5b7-3f87-4862-aacd-3be4a9d182fb", 00:12:18.797 "strip_size_kb": 0, 00:12:18.797 "state": "online", 00:12:18.797 "raid_level": "raid1", 00:12:18.797 "superblock": false, 00:12:18.797 "num_base_bdevs": 2, 00:12:18.797 "num_base_bdevs_discovered": 2, 00:12:18.797 "num_base_bdevs_operational": 2, 00:12:18.797 "process": { 00:12:18.797 "type": "rebuild", 00:12:18.797 "target": "spare", 00:12:18.797 "progress": { 00:12:18.797 "blocks": 20480, 00:12:18.797 "percent": 31 00:12:18.797 } 00:12:18.797 }, 00:12:18.797 "base_bdevs_list": [ 00:12:18.797 { 00:12:18.797 "name": "spare", 00:12:18.797 "uuid": "a2efbecf-1efb-5126-acf8-660131c7d873", 00:12:18.797 "is_configured": true, 00:12:18.797 "data_offset": 0, 00:12:18.797 "data_size": 65536 00:12:18.797 }, 
00:12:18.797 { 00:12:18.797 "name": "BaseBdev2", 00:12:18.797 "uuid": "2254edb8-6234-56a5-9fa7-67e3f7c63fd7", 00:12:18.797 "is_configured": true, 00:12:18.797 "data_offset": 0, 00:12:18.797 "data_size": 65536 00:12:18.797 } 00:12:18.797 ] 00:12:18.797 }' 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=376 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.797 08:49:56 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.797 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.798 08:49:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.798 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.798 "name": "raid_bdev1", 00:12:18.798 "uuid": "4eb8a5b7-3f87-4862-aacd-3be4a9d182fb", 00:12:18.798 "strip_size_kb": 0, 00:12:18.798 "state": "online", 00:12:18.798 "raid_level": "raid1", 00:12:18.798 "superblock": false, 00:12:18.798 "num_base_bdevs": 2, 00:12:18.798 "num_base_bdevs_discovered": 2, 00:12:18.798 "num_base_bdevs_operational": 2, 00:12:18.798 "process": { 00:12:18.798 "type": "rebuild", 00:12:18.798 "target": "spare", 00:12:18.798 "progress": { 00:12:18.798 "blocks": 22528, 00:12:18.798 "percent": 34 00:12:18.798 } 00:12:18.798 }, 00:12:18.798 "base_bdevs_list": [ 00:12:18.798 { 00:12:18.798 "name": "spare", 00:12:18.798 "uuid": "a2efbecf-1efb-5126-acf8-660131c7d873", 00:12:18.798 "is_configured": true, 00:12:18.798 "data_offset": 0, 00:12:18.798 "data_size": 65536 00:12:18.798 }, 00:12:18.798 { 00:12:18.798 "name": "BaseBdev2", 00:12:18.798 "uuid": "2254edb8-6234-56a5-9fa7-67e3f7c63fd7", 00:12:18.798 "is_configured": true, 00:12:18.798 "data_offset": 0, 00:12:18.798 "data_size": 65536 00:12:18.798 } 00:12:18.798 ] 00:12:18.798 }' 00:12:18.798 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.798 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:18.798 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.057 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:12:19.057 08:49:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:19.995 08:49:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:19.995 08:49:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.995 08:49:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.995 08:49:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.995 08:49:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.995 08:49:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.995 08:49:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.995 08:49:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.995 08:49:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.995 08:49:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.995 08:49:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.995 08:49:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.995 "name": "raid_bdev1", 00:12:19.995 "uuid": "4eb8a5b7-3f87-4862-aacd-3be4a9d182fb", 00:12:19.995 "strip_size_kb": 0, 00:12:19.995 "state": "online", 00:12:19.995 "raid_level": "raid1", 00:12:19.995 "superblock": false, 00:12:19.995 "num_base_bdevs": 2, 00:12:19.995 "num_base_bdevs_discovered": 2, 00:12:19.995 "num_base_bdevs_operational": 2, 00:12:19.995 "process": { 00:12:19.995 "type": "rebuild", 00:12:19.995 "target": "spare", 00:12:19.995 "progress": { 00:12:19.995 "blocks": 45056, 00:12:19.995 "percent": 68 00:12:19.995 } 00:12:19.995 }, 00:12:19.995 "base_bdevs_list": [ 00:12:19.995 { 
00:12:19.995 "name": "spare", 00:12:19.995 "uuid": "a2efbecf-1efb-5126-acf8-660131c7d873", 00:12:19.995 "is_configured": true, 00:12:19.995 "data_offset": 0, 00:12:19.995 "data_size": 65536 00:12:19.995 }, 00:12:19.995 { 00:12:19.995 "name": "BaseBdev2", 00:12:19.995 "uuid": "2254edb8-6234-56a5-9fa7-67e3f7c63fd7", 00:12:19.995 "is_configured": true, 00:12:19.995 "data_offset": 0, 00:12:19.995 "data_size": 65536 00:12:19.995 } 00:12:19.995 ] 00:12:19.995 }' 00:12:19.995 08:49:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.995 08:49:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:19.995 08:49:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.995 08:49:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:20.254 08:49:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:20.823 [2024-09-28 08:49:58.781158] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:20.823 [2024-09-28 08:49:58.781338] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:20.823 [2024-09-28 08:49:58.781415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.082 08:49:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:21.082 08:49:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.082 08:49:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.082 08:49:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.082 08:49:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.082 08:49:58 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.082 08:49:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.082 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.082 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.082 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.082 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.082 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.082 "name": "raid_bdev1", 00:12:21.082 "uuid": "4eb8a5b7-3f87-4862-aacd-3be4a9d182fb", 00:12:21.082 "strip_size_kb": 0, 00:12:21.082 "state": "online", 00:12:21.082 "raid_level": "raid1", 00:12:21.082 "superblock": false, 00:12:21.082 "num_base_bdevs": 2, 00:12:21.082 "num_base_bdevs_discovered": 2, 00:12:21.082 "num_base_bdevs_operational": 2, 00:12:21.082 "base_bdevs_list": [ 00:12:21.082 { 00:12:21.082 "name": "spare", 00:12:21.082 "uuid": "a2efbecf-1efb-5126-acf8-660131c7d873", 00:12:21.082 "is_configured": true, 00:12:21.082 "data_offset": 0, 00:12:21.082 "data_size": 65536 00:12:21.082 }, 00:12:21.082 { 00:12:21.082 "name": "BaseBdev2", 00:12:21.082 "uuid": "2254edb8-6234-56a5-9fa7-67e3f7c63fd7", 00:12:21.082 "is_configured": true, 00:12:21.082 "data_offset": 0, 00:12:21.082 "data_size": 65536 00:12:21.082 } 00:12:21.082 ] 00:12:21.082 }' 00:12:21.082 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:21.342 
08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.342 "name": "raid_bdev1", 00:12:21.342 "uuid": "4eb8a5b7-3f87-4862-aacd-3be4a9d182fb", 00:12:21.342 "strip_size_kb": 0, 00:12:21.342 "state": "online", 00:12:21.342 "raid_level": "raid1", 00:12:21.342 "superblock": false, 00:12:21.342 "num_base_bdevs": 2, 00:12:21.342 "num_base_bdevs_discovered": 2, 00:12:21.342 "num_base_bdevs_operational": 2, 00:12:21.342 "base_bdevs_list": [ 00:12:21.342 { 00:12:21.342 "name": "spare", 00:12:21.342 "uuid": "a2efbecf-1efb-5126-acf8-660131c7d873", 00:12:21.342 "is_configured": true, 00:12:21.342 "data_offset": 0, 00:12:21.342 "data_size": 65536 00:12:21.342 }, 00:12:21.342 { 00:12:21.342 "name": "BaseBdev2", 00:12:21.342 "uuid": "2254edb8-6234-56a5-9fa7-67e3f7c63fd7", 00:12:21.342 "is_configured": 
true, 00:12:21.342 "data_offset": 0, 00:12:21.342 "data_size": 65536 00:12:21.342 } 00:12:21.342 ] 00:12:21.342 }' 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.342 "name": "raid_bdev1", 00:12:21.342 "uuid": "4eb8a5b7-3f87-4862-aacd-3be4a9d182fb", 00:12:21.342 "strip_size_kb": 0, 00:12:21.342 "state": "online", 00:12:21.342 "raid_level": "raid1", 00:12:21.342 "superblock": false, 00:12:21.342 "num_base_bdevs": 2, 00:12:21.342 "num_base_bdevs_discovered": 2, 00:12:21.342 "num_base_bdevs_operational": 2, 00:12:21.342 "base_bdevs_list": [ 00:12:21.342 { 00:12:21.342 "name": "spare", 00:12:21.342 "uuid": "a2efbecf-1efb-5126-acf8-660131c7d873", 00:12:21.342 "is_configured": true, 00:12:21.342 "data_offset": 0, 00:12:21.342 "data_size": 65536 00:12:21.342 }, 00:12:21.342 { 00:12:21.342 "name": "BaseBdev2", 00:12:21.342 "uuid": "2254edb8-6234-56a5-9fa7-67e3f7c63fd7", 00:12:21.342 "is_configured": true, 00:12:21.342 "data_offset": 0, 00:12:21.342 "data_size": 65536 00:12:21.342 } 00:12:21.342 ] 00:12:21.342 }' 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.342 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.911 [2024-09-28 08:49:59.657525] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:21.911 [2024-09-28 08:49:59.657558] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:21.911 [2024-09-28 08:49:59.657649] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:12:21.911 [2024-09-28 08:49:59.657734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:21.911 [2024-09-28 08:49:59.657744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:21.911 08:49:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:21.912 08:49:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:21.912 
08:49:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:21.912 08:49:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:21.912 08:49:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:22.172 /dev/nbd0 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.172 1+0 records in 00:12:22.172 1+0 records out 00:12:22.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599602 s, 6.8 MB/s 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:22.172 08:49:59 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:22.172 08:49:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:22.432 /dev/nbd1 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.432 1+0 records in 00:12:22.432 1+0 records out 00:12:22.432 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000269947 s, 15.2 MB/s 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.432 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:22.692 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:22.692 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:22.692 08:50:00 bdev_raid.raid_rebuild_test 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:22.692 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.692 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.692 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:22.692 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:22.692 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.692 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.692 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75317 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 75317 ']' 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 75317 00:12:22.951 
08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75317 00:12:22.951 killing process with pid 75317 00:12:22.951 Received shutdown signal, test time was about 60.000000 seconds 00:12:22.951 00:12:22.951 Latency(us) 00:12:22.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.951 =================================================================================================================== 00:12:22.951 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75317' 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 75317 00:12:22.951 [2024-09-28 08:50:00.875695] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:22.951 08:50:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 75317 00:12:23.211 [2024-09-28 08:50:01.192156] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:24.589 08:50:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:24.589 00:12:24.589 real 0m15.018s 00:12:24.589 user 0m17.006s 00:12:24.589 sys 0m2.986s 00:12:24.589 08:50:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.589 ************************************ 00:12:24.589 END TEST raid_rebuild_test 00:12:24.589 ************************************ 00:12:24.589 08:50:02 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:24.589 08:50:02 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:24.589 08:50:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:24.589 08:50:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.589 08:50:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:24.848 ************************************ 00:12:24.848 START TEST raid_rebuild_test_sb 00:12:24.848 ************************************ 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75735 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75735 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75735 ']' 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:24.848 08:50:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:24.848 08:50:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.848 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:24.848 Zero copy mechanism will not be used. 00:12:24.848 [2024-09-28 08:50:02.699086] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:24.849 [2024-09-28 08:50:02.699342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75735 ] 00:12:25.108 [2024-09-28 08:50:02.868957] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.367 [2024-09-28 08:50:03.107454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.367 [2024-09-28 08:50:03.339851] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.367 [2024-09-28 08:50:03.339970] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.626 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:25.626 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:25.626 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:25.626 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:25.626 08:50:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.626 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.626 BaseBdev1_malloc 00:12:25.626 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.626 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:25.626 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.626 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.626 [2024-09-28 08:50:03.564508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:25.626 [2024-09-28 08:50:03.564625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.626 [2024-09-28 08:50:03.564675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:25.626 [2024-09-28 08:50:03.564693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.626 [2024-09-28 08:50:03.567083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.626 [2024-09-28 08:50:03.567119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:25.626 BaseBdev1 00:12:25.626 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.626 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:25.626 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:25.626 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.626 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.886 BaseBdev2_malloc 00:12:25.886 
08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.886 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:25.886 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.886 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.886 [2024-09-28 08:50:03.637006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:25.886 [2024-09-28 08:50:03.637074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.887 [2024-09-28 08:50:03.637094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:25.887 [2024-09-28 08:50:03.637105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.887 [2024-09-28 08:50:03.639469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.887 [2024-09-28 08:50:03.639508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:25.887 BaseBdev2 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.887 spare_malloc 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.887 spare_delay 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.887 [2024-09-28 08:50:03.709752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:25.887 [2024-09-28 08:50:03.709809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.887 [2024-09-28 08:50:03.709827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:25.887 [2024-09-28 08:50:03.709837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.887 [2024-09-28 08:50:03.712177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.887 [2024-09-28 08:50:03.712218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:25.887 spare 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.887 [2024-09-28 08:50:03.721795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.887 [2024-09-28 
08:50:03.723891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.887 [2024-09-28 08:50:03.724058] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:25.887 [2024-09-28 08:50:03.724074] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:25.887 [2024-09-28 08:50:03.724329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:25.887 [2024-09-28 08:50:03.724498] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:25.887 [2024-09-28 08:50:03.724507] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:25.887 [2024-09-28 08:50:03.724656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.887 "name": "raid_bdev1", 00:12:25.887 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:25.887 "strip_size_kb": 0, 00:12:25.887 "state": "online", 00:12:25.887 "raid_level": "raid1", 00:12:25.887 "superblock": true, 00:12:25.887 "num_base_bdevs": 2, 00:12:25.887 "num_base_bdevs_discovered": 2, 00:12:25.887 "num_base_bdevs_operational": 2, 00:12:25.887 "base_bdevs_list": [ 00:12:25.887 { 00:12:25.887 "name": "BaseBdev1", 00:12:25.887 "uuid": "84d5924c-db24-501d-81d7-165d93ee4fac", 00:12:25.887 "is_configured": true, 00:12:25.887 "data_offset": 2048, 00:12:25.887 "data_size": 63488 00:12:25.887 }, 00:12:25.887 { 00:12:25.887 "name": "BaseBdev2", 00:12:25.887 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:25.887 "is_configured": true, 00:12:25.887 "data_offset": 2048, 00:12:25.887 "data_size": 63488 00:12:25.887 } 00:12:25.887 ] 00:12:25.887 }' 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.887 08:50:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- 
# rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.456 [2024-09-28 08:50:04.177235] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:26.456 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:26.456 [2024-09-28 08:50:04.424662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:26.456 /dev/nbd0 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.715 1+0 records in 00:12:26.715 1+0 records out 00:12:26.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452204 s, 9.1 MB/s 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:26.715 08:50:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:30.907 63488+0 records in 00:12:30.907 63488+0 records out 00:12:30.907 32505856 bytes (33 MB, 31 MiB) copied, 3.50611 s, 9.3 MB/s 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@51 -- # local i 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:30.907 [2024-09-28 08:50:08.229793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.907 [2024-09-28 08:50:08.245867] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.907 "name": "raid_bdev1", 00:12:30.907 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:30.907 "strip_size_kb": 0, 00:12:30.907 "state": "online", 00:12:30.907 "raid_level": "raid1", 00:12:30.907 "superblock": true, 00:12:30.907 "num_base_bdevs": 2, 00:12:30.907 "num_base_bdevs_discovered": 1, 00:12:30.907 "num_base_bdevs_operational": 1, 00:12:30.907 "base_bdevs_list": [ 00:12:30.907 { 00:12:30.907 "name": null, 00:12:30.907 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:30.907 "is_configured": false, 00:12:30.907 "data_offset": 0, 00:12:30.907 "data_size": 63488 00:12:30.907 }, 00:12:30.907 { 00:12:30.907 "name": "BaseBdev2", 00:12:30.907 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:30.907 "is_configured": true, 00:12:30.907 "data_offset": 2048, 00:12:30.907 "data_size": 63488 00:12:30.907 } 00:12:30.907 ] 00:12:30.907 }' 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.907 08:50:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.908 [2024-09-28 08:50:08.693092] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.908 [2024-09-28 08:50:08.709864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:30.908 08:50:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.908 08:50:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:30.908 [2024-09-28 08:50:08.712038] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:31.848 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.848 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.848 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.848 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.848 
08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.848 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.848 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.848 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.848 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.848 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.848 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.848 "name": "raid_bdev1", 00:12:31.848 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:31.848 "strip_size_kb": 0, 00:12:31.848 "state": "online", 00:12:31.848 "raid_level": "raid1", 00:12:31.848 "superblock": true, 00:12:31.848 "num_base_bdevs": 2, 00:12:31.848 "num_base_bdevs_discovered": 2, 00:12:31.848 "num_base_bdevs_operational": 2, 00:12:31.848 "process": { 00:12:31.848 "type": "rebuild", 00:12:31.848 "target": "spare", 00:12:31.848 "progress": { 00:12:31.848 "blocks": 20480, 00:12:31.848 "percent": 32 00:12:31.848 } 00:12:31.848 }, 00:12:31.848 "base_bdevs_list": [ 00:12:31.848 { 00:12:31.848 "name": "spare", 00:12:31.848 "uuid": "c4bfa2c4-1a53-501e-a09f-d871139de6bb", 00:12:31.848 "is_configured": true, 00:12:31.848 "data_offset": 2048, 00:12:31.848 "data_size": 63488 00:12:31.848 }, 00:12:31.848 { 00:12:31.848 "name": "BaseBdev2", 00:12:31.848 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:31.848 "is_configured": true, 00:12:31.848 "data_offset": 2048, 00:12:31.848 "data_size": 63488 00:12:31.848 } 00:12:31.848 ] 00:12:31.848 }' 00:12:31.848 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.848 08:50:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.848 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.112 [2024-09-28 08:50:09.871287] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:32.112 [2024-09-28 08:50:09.920778] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:32.112 [2024-09-28 08:50:09.920840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.112 [2024-09-28 08:50:09.920856] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:32.112 [2024-09-28 08:50:09.920867] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.112 08:50:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.112 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.112 "name": "raid_bdev1", 00:12:32.112 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:32.112 "strip_size_kb": 0, 00:12:32.112 "state": "online", 00:12:32.112 "raid_level": "raid1", 00:12:32.112 "superblock": true, 00:12:32.112 "num_base_bdevs": 2, 00:12:32.112 "num_base_bdevs_discovered": 1, 00:12:32.112 "num_base_bdevs_operational": 1, 00:12:32.112 "base_bdevs_list": [ 00:12:32.112 { 00:12:32.112 "name": null, 00:12:32.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.112 "is_configured": false, 00:12:32.112 "data_offset": 0, 00:12:32.112 "data_size": 63488 00:12:32.112 }, 00:12:32.112 { 00:12:32.112 "name": "BaseBdev2", 00:12:32.112 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:32.112 "is_configured": true, 00:12:32.112 "data_offset": 2048, 00:12:32.112 "data_size": 63488 00:12:32.112 } 00:12:32.112 ] 00:12:32.112 }' 00:12:32.112 08:50:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.112 08:50:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.681 "name": "raid_bdev1", 00:12:32.681 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:32.681 "strip_size_kb": 0, 00:12:32.681 "state": "online", 00:12:32.681 "raid_level": "raid1", 00:12:32.681 "superblock": true, 00:12:32.681 "num_base_bdevs": 2, 00:12:32.681 "num_base_bdevs_discovered": 1, 00:12:32.681 "num_base_bdevs_operational": 1, 00:12:32.681 "base_bdevs_list": [ 00:12:32.681 { 00:12:32.681 "name": null, 00:12:32.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.681 "is_configured": false, 00:12:32.681 "data_offset": 0, 00:12:32.681 "data_size": 63488 00:12:32.681 }, 00:12:32.681 
{ 00:12:32.681 "name": "BaseBdev2", 00:12:32.681 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:32.681 "is_configured": true, 00:12:32.681 "data_offset": 2048, 00:12:32.681 "data_size": 63488 00:12:32.681 } 00:12:32.681 ] 00:12:32.681 }' 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.681 [2024-09-28 08:50:10.502809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:32.681 [2024-09-28 08:50:10.518981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.681 08:50:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:32.681 [2024-09-28 08:50:10.521073] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:33.620 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.620 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.620 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.620 08:50:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.620 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.620 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.620 08:50:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.620 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.620 08:50:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.620 08:50:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.620 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.620 "name": "raid_bdev1", 00:12:33.620 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:33.620 "strip_size_kb": 0, 00:12:33.620 "state": "online", 00:12:33.620 "raid_level": "raid1", 00:12:33.620 "superblock": true, 00:12:33.620 "num_base_bdevs": 2, 00:12:33.620 "num_base_bdevs_discovered": 2, 00:12:33.620 "num_base_bdevs_operational": 2, 00:12:33.620 "process": { 00:12:33.620 "type": "rebuild", 00:12:33.620 "target": "spare", 00:12:33.620 "progress": { 00:12:33.620 "blocks": 20480, 00:12:33.620 "percent": 32 00:12:33.620 } 00:12:33.620 }, 00:12:33.620 "base_bdevs_list": [ 00:12:33.620 { 00:12:33.620 "name": "spare", 00:12:33.620 "uuid": "c4bfa2c4-1a53-501e-a09f-d871139de6bb", 00:12:33.620 "is_configured": true, 00:12:33.620 "data_offset": 2048, 00:12:33.620 "data_size": 63488 00:12:33.620 }, 00:12:33.620 { 00:12:33.620 "name": "BaseBdev2", 00:12:33.620 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:33.620 "is_configured": true, 00:12:33.620 "data_offset": 2048, 00:12:33.620 "data_size": 63488 00:12:33.620 } 00:12:33.620 ] 00:12:33.620 }' 00:12:33.620 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:33.880 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=391 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.880 "name": "raid_bdev1", 00:12:33.880 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:33.880 "strip_size_kb": 0, 00:12:33.880 "state": "online", 00:12:33.880 "raid_level": "raid1", 00:12:33.880 "superblock": true, 00:12:33.880 "num_base_bdevs": 2, 00:12:33.880 "num_base_bdevs_discovered": 2, 00:12:33.880 "num_base_bdevs_operational": 2, 00:12:33.880 "process": { 00:12:33.880 "type": "rebuild", 00:12:33.880 "target": "spare", 00:12:33.880 "progress": { 00:12:33.880 "blocks": 22528, 00:12:33.880 "percent": 35 00:12:33.880 } 00:12:33.880 }, 00:12:33.880 "base_bdevs_list": [ 00:12:33.880 { 00:12:33.880 "name": "spare", 00:12:33.880 "uuid": "c4bfa2c4-1a53-501e-a09f-d871139de6bb", 00:12:33.880 "is_configured": true, 00:12:33.880 "data_offset": 2048, 00:12:33.880 "data_size": 63488 00:12:33.880 }, 00:12:33.880 { 00:12:33.880 "name": "BaseBdev2", 00:12:33.880 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:33.880 "is_configured": true, 00:12:33.880 "data_offset": 2048, 00:12:33.880 "data_size": 63488 00:12:33.880 } 00:12:33.880 ] 00:12:33.880 }' 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.880 08:50:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.880 08:50:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:34.818 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:34.818 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.818 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.818 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.818 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.818 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.818 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.818 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.818 08:50:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.818 08:50:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.078 08:50:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.078 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.078 "name": "raid_bdev1", 00:12:35.078 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:35.078 "strip_size_kb": 0, 00:12:35.078 "state": "online", 00:12:35.078 "raid_level": "raid1", 00:12:35.078 "superblock": true, 00:12:35.078 "num_base_bdevs": 2, 00:12:35.078 "num_base_bdevs_discovered": 2, 00:12:35.078 "num_base_bdevs_operational": 2, 00:12:35.078 "process": { 00:12:35.078 "type": "rebuild", 00:12:35.078 "target": "spare", 00:12:35.078 "progress": { 00:12:35.078 "blocks": 45056, 00:12:35.078 "percent": 70 00:12:35.078 } 00:12:35.078 }, 00:12:35.078 "base_bdevs_list": [ 00:12:35.078 { 
00:12:35.078 "name": "spare", 00:12:35.078 "uuid": "c4bfa2c4-1a53-501e-a09f-d871139de6bb", 00:12:35.078 "is_configured": true, 00:12:35.078 "data_offset": 2048, 00:12:35.078 "data_size": 63488 00:12:35.078 }, 00:12:35.078 { 00:12:35.078 "name": "BaseBdev2", 00:12:35.078 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:35.078 "is_configured": true, 00:12:35.078 "data_offset": 2048, 00:12:35.078 "data_size": 63488 00:12:35.078 } 00:12:35.078 ] 00:12:35.078 }' 00:12:35.078 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.078 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:35.078 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.078 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.078 08:50:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:36.016 [2024-09-28 08:50:13.642831] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:36.016 [2024-09-28 08:50:13.642935] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:36.016 [2024-09-28 08:50:13.643044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.016 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:36.016 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.016 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.016 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.016 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.016 08:50:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.016 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.016 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.016 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.016 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.016 08:50:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.016 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.016 "name": "raid_bdev1", 00:12:36.016 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:36.016 "strip_size_kb": 0, 00:12:36.016 "state": "online", 00:12:36.016 "raid_level": "raid1", 00:12:36.016 "superblock": true, 00:12:36.016 "num_base_bdevs": 2, 00:12:36.016 "num_base_bdevs_discovered": 2, 00:12:36.016 "num_base_bdevs_operational": 2, 00:12:36.016 "base_bdevs_list": [ 00:12:36.016 { 00:12:36.016 "name": "spare", 00:12:36.016 "uuid": "c4bfa2c4-1a53-501e-a09f-d871139de6bb", 00:12:36.016 "is_configured": true, 00:12:36.016 "data_offset": 2048, 00:12:36.016 "data_size": 63488 00:12:36.016 }, 00:12:36.016 { 00:12:36.016 "name": "BaseBdev2", 00:12:36.016 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:36.016 "is_configured": true, 00:12:36.016 "data_offset": 2048, 00:12:36.016 "data_size": 63488 00:12:36.016 } 00:12:36.016 ] 00:12:36.016 }' 00:12:36.016 08:50:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.276 "name": "raid_bdev1", 00:12:36.276 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:36.276 "strip_size_kb": 0, 00:12:36.276 "state": "online", 00:12:36.276 "raid_level": "raid1", 00:12:36.276 "superblock": true, 00:12:36.276 "num_base_bdevs": 2, 00:12:36.276 "num_base_bdevs_discovered": 2, 00:12:36.276 "num_base_bdevs_operational": 2, 00:12:36.276 "base_bdevs_list": [ 00:12:36.276 { 00:12:36.276 "name": "spare", 00:12:36.276 "uuid": "c4bfa2c4-1a53-501e-a09f-d871139de6bb", 00:12:36.276 "is_configured": true, 00:12:36.276 "data_offset": 2048, 00:12:36.276 "data_size": 63488 00:12:36.276 }, 00:12:36.276 { 00:12:36.276 "name": 
"BaseBdev2", 00:12:36.276 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:36.276 "is_configured": true, 00:12:36.276 "data_offset": 2048, 00:12:36.276 "data_size": 63488 00:12:36.276 } 00:12:36.276 ] 00:12:36.276 }' 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.276 "name": "raid_bdev1", 00:12:36.276 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:36.276 "strip_size_kb": 0, 00:12:36.276 "state": "online", 00:12:36.276 "raid_level": "raid1", 00:12:36.276 "superblock": true, 00:12:36.276 "num_base_bdevs": 2, 00:12:36.276 "num_base_bdevs_discovered": 2, 00:12:36.276 "num_base_bdevs_operational": 2, 00:12:36.276 "base_bdevs_list": [ 00:12:36.276 { 00:12:36.276 "name": "spare", 00:12:36.276 "uuid": "c4bfa2c4-1a53-501e-a09f-d871139de6bb", 00:12:36.276 "is_configured": true, 00:12:36.276 "data_offset": 2048, 00:12:36.276 "data_size": 63488 00:12:36.276 }, 00:12:36.276 { 00:12:36.276 "name": "BaseBdev2", 00:12:36.276 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:36.276 "is_configured": true, 00:12:36.276 "data_offset": 2048, 00:12:36.276 "data_size": 63488 00:12:36.276 } 00:12:36.276 ] 00:12:36.276 }' 00:12:36.276 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.277 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.846 [2024-09-28 08:50:14.622643] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:36.846 [2024-09-28 08:50:14.622690] 
bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:36.846 [2024-09-28 08:50:14.622784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:36.846 [2024-09-28 08:50:14.622874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:36.846 [2024-09-28 08:50:14.622886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:36.846 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:37.106 /dev/nbd0 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.106 1+0 records in 00:12:37.106 1+0 records out 00:12:37.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000404444 s, 10.1 MB/s 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:37.106 08:50:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:37.366 /dev/nbd1 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:37.366 08:50:15 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.366 1+0 records in 00:12:37.366 1+0 records out 00:12:37.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441321 s, 9.3 MB/s 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:37.366 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.366 
08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:37.626 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:37.626 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:37.626 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:37.626 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.626 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.626 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:37.626 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:37.626 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.626 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.626 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.886 [2024-09-28 08:50:15.780442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:37.886 [2024-09-28 08:50:15.780501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.886 [2024-09-28 08:50:15.780526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:37.886 [2024-09-28 08:50:15.780536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.886 [2024-09-28 08:50:15.783171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.886 [2024-09-28 08:50:15.783204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:37.886 [2024-09-28 08:50:15.783310] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:37.886 [2024-09-28 08:50:15.783363] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.886 [2024-09-28 08:50:15.783517] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:12:37.886 spare 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.886 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.145 [2024-09-28 08:50:15.883419] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:38.145 [2024-09-28 08:50:15.883455] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:38.145 [2024-09-28 08:50:15.883805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:38.145 [2024-09-28 08:50:15.884018] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:38.145 [2024-09-28 08:50:15.884037] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:38.145 [2024-09-28 08:50:15.884259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.145 08:50:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.145 "name": "raid_bdev1", 00:12:38.145 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:38.145 "strip_size_kb": 0, 00:12:38.145 "state": "online", 00:12:38.145 "raid_level": "raid1", 00:12:38.145 "superblock": true, 00:12:38.145 "num_base_bdevs": 2, 00:12:38.145 "num_base_bdevs_discovered": 2, 00:12:38.146 "num_base_bdevs_operational": 2, 00:12:38.146 "base_bdevs_list": [ 00:12:38.146 { 00:12:38.146 "name": "spare", 00:12:38.146 "uuid": "c4bfa2c4-1a53-501e-a09f-d871139de6bb", 00:12:38.146 "is_configured": true, 00:12:38.146 "data_offset": 2048, 00:12:38.146 "data_size": 63488 00:12:38.146 }, 00:12:38.146 { 00:12:38.146 "name": "BaseBdev2", 00:12:38.146 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:38.146 "is_configured": true, 00:12:38.146 "data_offset": 2048, 00:12:38.146 "data_size": 63488 00:12:38.146 } 00:12:38.146 ] 00:12:38.146 }' 00:12:38.146 08:50:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.146 08:50:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.405 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.405 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.405 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.405 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.405 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.405 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.405 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.405 08:50:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.405 08:50:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.405 08:50:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.405 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.405 "name": "raid_bdev1", 00:12:38.405 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:38.405 "strip_size_kb": 0, 00:12:38.405 "state": "online", 00:12:38.405 "raid_level": "raid1", 00:12:38.405 "superblock": true, 00:12:38.405 "num_base_bdevs": 2, 00:12:38.405 "num_base_bdevs_discovered": 2, 00:12:38.405 "num_base_bdevs_operational": 2, 00:12:38.406 "base_bdevs_list": [ 00:12:38.406 { 00:12:38.406 "name": "spare", 00:12:38.406 "uuid": "c4bfa2c4-1a53-501e-a09f-d871139de6bb", 00:12:38.406 "is_configured": true, 00:12:38.406 "data_offset": 2048, 00:12:38.406 "data_size": 63488 00:12:38.406 }, 
00:12:38.406 { 00:12:38.406 "name": "BaseBdev2", 00:12:38.406 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:38.406 "is_configured": true, 00:12:38.406 "data_offset": 2048, 00:12:38.406 "data_size": 63488 00:12:38.406 } 00:12:38.406 ] 00:12:38.406 }' 00:12:38.406 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.665 [2024-09-28 08:50:16.503272] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.665 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.665 "name": "raid_bdev1", 00:12:38.665 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:38.665 "strip_size_kb": 0, 00:12:38.665 "state": "online", 00:12:38.665 "raid_level": "raid1", 00:12:38.665 "superblock": true, 00:12:38.665 "num_base_bdevs": 2, 00:12:38.665 "num_base_bdevs_discovered": 1, 00:12:38.665 "num_base_bdevs_operational": 
1, 00:12:38.665 "base_bdevs_list": [ 00:12:38.665 { 00:12:38.665 "name": null, 00:12:38.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.665 "is_configured": false, 00:12:38.665 "data_offset": 0, 00:12:38.665 "data_size": 63488 00:12:38.665 }, 00:12:38.665 { 00:12:38.665 "name": "BaseBdev2", 00:12:38.665 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:38.665 "is_configured": true, 00:12:38.665 "data_offset": 2048, 00:12:38.665 "data_size": 63488 00:12:38.666 } 00:12:38.666 ] 00:12:38.666 }' 00:12:38.666 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.666 08:50:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.233 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:39.233 08:50:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.233 08:50:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.233 [2024-09-28 08:50:16.962595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:39.233 [2024-09-28 08:50:16.962840] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:39.233 [2024-09-28 08:50:16.962864] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:39.234 [2024-09-28 08:50:16.962898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:39.234 [2024-09-28 08:50:16.979033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:39.234 08:50:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.234 08:50:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:39.234 [2024-09-28 08:50:16.981171] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:40.172 08:50:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.172 08:50:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.172 08:50:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.172 08:50:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.172 08:50:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.172 08:50:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.172 08:50:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.172 08:50:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.172 08:50:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.172 08:50:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.172 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.172 "name": "raid_bdev1", 00:12:40.172 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:40.172 "strip_size_kb": 0, 00:12:40.172 "state": "online", 00:12:40.172 "raid_level": "raid1", 
00:12:40.172 "superblock": true, 00:12:40.172 "num_base_bdevs": 2, 00:12:40.172 "num_base_bdevs_discovered": 2, 00:12:40.172 "num_base_bdevs_operational": 2, 00:12:40.172 "process": { 00:12:40.172 "type": "rebuild", 00:12:40.172 "target": "spare", 00:12:40.172 "progress": { 00:12:40.172 "blocks": 20480, 00:12:40.172 "percent": 32 00:12:40.172 } 00:12:40.172 }, 00:12:40.172 "base_bdevs_list": [ 00:12:40.172 { 00:12:40.172 "name": "spare", 00:12:40.172 "uuid": "c4bfa2c4-1a53-501e-a09f-d871139de6bb", 00:12:40.172 "is_configured": true, 00:12:40.172 "data_offset": 2048, 00:12:40.172 "data_size": 63488 00:12:40.172 }, 00:12:40.172 { 00:12:40.172 "name": "BaseBdev2", 00:12:40.172 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:40.172 "is_configured": true, 00:12:40.172 "data_offset": 2048, 00:12:40.172 "data_size": 63488 00:12:40.172 } 00:12:40.172 ] 00:12:40.172 }' 00:12:40.172 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.172 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.172 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.172 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.172 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:40.172 08:50:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.172 08:50:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.172 [2024-09-28 08:50:18.116086] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.432 [2024-09-28 08:50:18.189571] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:40.432 [2024-09-28 08:50:18.189632] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:40.432 [2024-09-28 08:50:18.189646] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.432 [2024-09-28 08:50:18.189664] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.432 "name": "raid_bdev1", 00:12:40.432 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:40.432 "strip_size_kb": 0, 00:12:40.432 "state": "online", 00:12:40.432 "raid_level": "raid1", 00:12:40.432 "superblock": true, 00:12:40.432 "num_base_bdevs": 2, 00:12:40.432 "num_base_bdevs_discovered": 1, 00:12:40.432 "num_base_bdevs_operational": 1, 00:12:40.432 "base_bdevs_list": [ 00:12:40.432 { 00:12:40.432 "name": null, 00:12:40.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.432 "is_configured": false, 00:12:40.432 "data_offset": 0, 00:12:40.432 "data_size": 63488 00:12:40.432 }, 00:12:40.432 { 00:12:40.432 "name": "BaseBdev2", 00:12:40.432 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:40.432 "is_configured": true, 00:12:40.432 "data_offset": 2048, 00:12:40.432 "data_size": 63488 00:12:40.432 } 00:12:40.432 ] 00:12:40.432 }' 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.432 08:50:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.691 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:40.691 08:50:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.691 08:50:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.692 [2024-09-28 08:50:18.639645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:40.692 [2024-09-28 08:50:18.639720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.692 [2024-09-28 08:50:18.639761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:40.692 [2024-09-28 08:50:18.639775] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.692 [2024-09-28 08:50:18.640343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.692 [2024-09-28 08:50:18.640377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:40.692 [2024-09-28 08:50:18.640474] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:40.692 [2024-09-28 08:50:18.640496] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:40.692 [2024-09-28 08:50:18.640507] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:40.692 [2024-09-28 08:50:18.640545] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:40.692 [2024-09-28 08:50:18.656510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:40.692 spare 00:12:40.692 08:50:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.692 08:50:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:40.692 [2024-09-28 08:50:18.658737] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:42.072 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.072 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.072 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.072 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.072 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.072 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:42.072 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.073 "name": "raid_bdev1", 00:12:42.073 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:42.073 "strip_size_kb": 0, 00:12:42.073 "state": "online", 00:12:42.073 "raid_level": "raid1", 00:12:42.073 "superblock": true, 00:12:42.073 "num_base_bdevs": 2, 00:12:42.073 "num_base_bdevs_discovered": 2, 00:12:42.073 "num_base_bdevs_operational": 2, 00:12:42.073 "process": { 00:12:42.073 "type": "rebuild", 00:12:42.073 "target": "spare", 00:12:42.073 "progress": { 00:12:42.073 "blocks": 20480, 00:12:42.073 "percent": 32 00:12:42.073 } 00:12:42.073 }, 00:12:42.073 "base_bdevs_list": [ 00:12:42.073 { 00:12:42.073 "name": "spare", 00:12:42.073 "uuid": "c4bfa2c4-1a53-501e-a09f-d871139de6bb", 00:12:42.073 "is_configured": true, 00:12:42.073 "data_offset": 2048, 00:12:42.073 "data_size": 63488 00:12:42.073 }, 00:12:42.073 { 00:12:42.073 "name": "BaseBdev2", 00:12:42.073 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:42.073 "is_configured": true, 00:12:42.073 "data_offset": 2048, 00:12:42.073 "data_size": 63488 00:12:42.073 } 00:12:42.073 ] 00:12:42.073 }' 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.073 
08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.073 [2024-09-28 08:50:19.797835] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:42.073 [2024-09-28 08:50:19.867339] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:42.073 [2024-09-28 08:50:19.867396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.073 [2024-09-28 08:50:19.867414] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:42.073 [2024-09-28 08:50:19.867421] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.073 "name": "raid_bdev1", 00:12:42.073 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:42.073 "strip_size_kb": 0, 00:12:42.073 "state": "online", 00:12:42.073 "raid_level": "raid1", 00:12:42.073 "superblock": true, 00:12:42.073 "num_base_bdevs": 2, 00:12:42.073 "num_base_bdevs_discovered": 1, 00:12:42.073 "num_base_bdevs_operational": 1, 00:12:42.073 "base_bdevs_list": [ 00:12:42.073 { 00:12:42.073 "name": null, 00:12:42.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.073 "is_configured": false, 00:12:42.073 "data_offset": 0, 00:12:42.073 "data_size": 63488 00:12:42.073 }, 00:12:42.073 { 00:12:42.073 "name": "BaseBdev2", 00:12:42.073 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:42.073 "is_configured": true, 00:12:42.073 "data_offset": 2048, 00:12:42.073 "data_size": 63488 00:12:42.073 } 00:12:42.073 ] 00:12:42.073 }' 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.073 08:50:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.332 08:50:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:42.332 08:50:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.332 08:50:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:42.332 08:50:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:42.332 08:50:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.332 08:50:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.332 08:50:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.332 08:50:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.332 08:50:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.332 08:50:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.332 08:50:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.332 "name": "raid_bdev1", 00:12:42.332 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:42.332 "strip_size_kb": 0, 00:12:42.332 "state": "online", 00:12:42.332 "raid_level": "raid1", 00:12:42.332 "superblock": true, 00:12:42.332 "num_base_bdevs": 2, 00:12:42.332 "num_base_bdevs_discovered": 1, 00:12:42.332 "num_base_bdevs_operational": 1, 00:12:42.332 "base_bdevs_list": [ 00:12:42.332 { 00:12:42.332 "name": null, 00:12:42.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.332 "is_configured": false, 00:12:42.332 "data_offset": 0, 00:12:42.332 "data_size": 63488 00:12:42.332 }, 00:12:42.332 { 00:12:42.332 "name": "BaseBdev2", 00:12:42.332 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:42.332 "is_configured": true, 00:12:42.332 "data_offset": 2048, 00:12:42.332 "data_size": 
63488 00:12:42.332 } 00:12:42.332 ] 00:12:42.332 }' 00:12:42.332 08:50:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.591 08:50:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:42.591 08:50:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.591 08:50:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:42.591 08:50:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:42.591 08:50:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.591 08:50:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.591 08:50:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.591 08:50:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:42.591 08:50:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.591 08:50:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.591 [2024-09-28 08:50:20.424297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:42.591 [2024-09-28 08:50:20.424366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.591 [2024-09-28 08:50:20.424391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:42.591 [2024-09-28 08:50:20.424400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.591 [2024-09-28 08:50:20.424927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.591 [2024-09-28 08:50:20.424946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:42.591 [2024-09-28 08:50:20.425039] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:42.591 [2024-09-28 08:50:20.425055] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:42.591 [2024-09-28 08:50:20.425066] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:42.591 [2024-09-28 08:50:20.425082] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:42.591 BaseBdev1 00:12:42.591 08:50:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.591 08:50:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:43.528 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:43.528 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.528 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.528 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.528 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.528 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:43.528 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.528 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.528 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.528 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.529 08:50:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.529 08:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.529 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.529 08:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.529 08:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.529 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.529 "name": "raid_bdev1", 00:12:43.529 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:43.529 "strip_size_kb": 0, 00:12:43.529 "state": "online", 00:12:43.529 "raid_level": "raid1", 00:12:43.529 "superblock": true, 00:12:43.529 "num_base_bdevs": 2, 00:12:43.529 "num_base_bdevs_discovered": 1, 00:12:43.529 "num_base_bdevs_operational": 1, 00:12:43.529 "base_bdevs_list": [ 00:12:43.529 { 00:12:43.529 "name": null, 00:12:43.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.529 "is_configured": false, 00:12:43.529 "data_offset": 0, 00:12:43.529 "data_size": 63488 00:12:43.529 }, 00:12:43.529 { 00:12:43.529 "name": "BaseBdev2", 00:12:43.529 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:43.529 "is_configured": true, 00:12:43.529 "data_offset": 2048, 00:12:43.529 "data_size": 63488 00:12:43.529 } 00:12:43.529 ] 00:12:43.529 }' 00:12:43.529 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.529 08:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.097 "name": "raid_bdev1", 00:12:44.097 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:44.097 "strip_size_kb": 0, 00:12:44.097 "state": "online", 00:12:44.097 "raid_level": "raid1", 00:12:44.097 "superblock": true, 00:12:44.097 "num_base_bdevs": 2, 00:12:44.097 "num_base_bdevs_discovered": 1, 00:12:44.097 "num_base_bdevs_operational": 1, 00:12:44.097 "base_bdevs_list": [ 00:12:44.097 { 00:12:44.097 "name": null, 00:12:44.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.097 "is_configured": false, 00:12:44.097 "data_offset": 0, 00:12:44.097 "data_size": 63488 00:12:44.097 }, 00:12:44.097 { 00:12:44.097 "name": "BaseBdev2", 00:12:44.097 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:44.097 "is_configured": true, 00:12:44.097 "data_offset": 2048, 00:12:44.097 "data_size": 63488 00:12:44.097 } 00:12:44.097 ] 00:12:44.097 }' 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:44.097 08:50:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.097 08:50:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.097 [2024-09-28 08:50:22.001697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.097 [2024-09-28 08:50:22.001904] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:44.097 [2024-09-28 08:50:22.001926] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:44.097 request: 00:12:44.097 { 00:12:44.097 "base_bdev": "BaseBdev1", 00:12:44.097 "raid_bdev": "raid_bdev1", 00:12:44.097 "method": 
"bdev_raid_add_base_bdev", 00:12:44.097 "req_id": 1 00:12:44.097 } 00:12:44.097 Got JSON-RPC error response 00:12:44.097 response: 00:12:44.097 { 00:12:44.097 "code": -22, 00:12:44.097 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:44.097 } 00:12:44.097 08:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:44.097 08:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:12:44.097 08:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:44.097 08:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:44.097 08:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:44.097 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:45.033 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:45.033 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.033 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.033 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.033 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.033 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:45.033 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.033 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.033 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.033 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.033 08:50:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.033 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.033 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.033 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.300 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.300 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.300 "name": "raid_bdev1", 00:12:45.300 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:45.300 "strip_size_kb": 0, 00:12:45.300 "state": "online", 00:12:45.300 "raid_level": "raid1", 00:12:45.300 "superblock": true, 00:12:45.300 "num_base_bdevs": 2, 00:12:45.300 "num_base_bdevs_discovered": 1, 00:12:45.300 "num_base_bdevs_operational": 1, 00:12:45.300 "base_bdevs_list": [ 00:12:45.300 { 00:12:45.300 "name": null, 00:12:45.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.300 "is_configured": false, 00:12:45.300 "data_offset": 0, 00:12:45.300 "data_size": 63488 00:12:45.300 }, 00:12:45.300 { 00:12:45.300 "name": "BaseBdev2", 00:12:45.300 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:45.300 "is_configured": true, 00:12:45.300 "data_offset": 2048, 00:12:45.300 "data_size": 63488 00:12:45.300 } 00:12:45.300 ] 00:12:45.300 }' 00:12:45.300 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.300 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.570 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.570 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.570 08:50:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.570 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.570 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.570 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.570 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.570 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.570 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.570 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.570 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.570 "name": "raid_bdev1", 00:12:45.570 "uuid": "1e6aca7d-4eb7-4dda-acf3-6a01edeb8bc0", 00:12:45.570 "strip_size_kb": 0, 00:12:45.570 "state": "online", 00:12:45.570 "raid_level": "raid1", 00:12:45.570 "superblock": true, 00:12:45.570 "num_base_bdevs": 2, 00:12:45.570 "num_base_bdevs_discovered": 1, 00:12:45.570 "num_base_bdevs_operational": 1, 00:12:45.570 "base_bdevs_list": [ 00:12:45.570 { 00:12:45.570 "name": null, 00:12:45.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.570 "is_configured": false, 00:12:45.570 "data_offset": 0, 00:12:45.570 "data_size": 63488 00:12:45.570 }, 00:12:45.570 { 00:12:45.570 "name": "BaseBdev2", 00:12:45.570 "uuid": "e41d9417-9d60-54df-a84f-2bb0b39c4b04", 00:12:45.570 "is_configured": true, 00:12:45.570 "data_offset": 2048, 00:12:45.570 "data_size": 63488 00:12:45.570 } 00:12:45.570 ] 00:12:45.570 }' 00:12:45.570 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.570 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:45.570 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.830 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:45.830 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75735 00:12:45.830 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75735 ']' 00:12:45.830 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 75735 00:12:45.830 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:45.830 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:45.830 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75735 00:12:45.830 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:45.830 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:45.830 killing process with pid 75735 00:12:45.830 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75735' 00:12:45.830 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 75735 00:12:45.830 Received shutdown signal, test time was about 60.000000 seconds 00:12:45.830 00:12:45.830 Latency(us) 00:12:45.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.830 =================================================================================================================== 00:12:45.830 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:45.830 [2024-09-28 08:50:23.638135] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:45.830 [2024-09-28 08:50:23.638288] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:12:45.830 [2024-09-28 08:50:23.638350] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.830 [2024-09-28 08:50:23.638363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:45.830 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 75735 00:12:46.090 [2024-09-28 08:50:23.953651] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:47.469 00:12:47.469 real 0m22.682s 00:12:47.469 user 0m27.666s 00:12:47.469 sys 0m3.558s 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.469 ************************************ 00:12:47.469 END TEST raid_rebuild_test_sb 00:12:47.469 ************************************ 00:12:47.469 08:50:25 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:47.469 08:50:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:47.469 08:50:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:47.469 08:50:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:47.469 ************************************ 00:12:47.469 START TEST raid_rebuild_test_io 00:12:47.469 ************************************ 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 
-- # local superblock=false 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:47.469 
08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76454 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76454 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 76454 ']' 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:47.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:47.469 08:50:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.469 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:47.469 Zero copy mechanism will not be used. 00:12:47.469 [2024-09-28 08:50:25.458611] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:47.469 [2024-09-28 08:50:25.458750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76454 ] 00:12:47.728 [2024-09-28 08:50:25.618209] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.989 [2024-09-28 08:50:25.874862] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.249 [2024-09-28 08:50:26.090512] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.249 [2024-09-28 08:50:26.090555] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.508 BaseBdev1_malloc 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.508 [2024-09-28 08:50:26.329421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:48.508 [2024-09-28 08:50:26.329501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.508 [2024-09-28 08:50:26.329524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:48.508 [2024-09-28 08:50:26.329540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.508 [2024-09-28 08:50:26.331909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.508 [2024-09-28 08:50:26.331947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:48.508 BaseBdev1 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.508 BaseBdev2_malloc 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.508 [2024-09-28 08:50:26.418716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:48.508 [2024-09-28 08:50:26.418790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.508 [2024-09-28 08:50:26.418809] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:48.508 [2024-09-28 08:50:26.418822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.508 [2024-09-28 08:50:26.421120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.508 [2024-09-28 08:50:26.421159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:48.508 BaseBdev2 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.508 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.508 spare_malloc 00:12:48.509 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.509 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:48.509 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.509 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.509 spare_delay 00:12:48.509 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.509 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:48.509 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.509 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.509 [2024-09-28 08:50:26.492013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:12:48.509 [2024-09-28 08:50:26.492130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.509 [2024-09-28 08:50:26.492169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:48.509 [2024-09-28 08:50:26.492207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.509 [2024-09-28 08:50:26.494515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.509 [2024-09-28 08:50:26.494585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:48.509 spare 00:12:48.509 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.509 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:48.509 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.509 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.768 [2024-09-28 08:50:26.504040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.768 [2024-09-28 08:50:26.506153] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:48.768 [2024-09-28 08:50:26.506291] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:48.768 [2024-09-28 08:50:26.506308] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:48.768 [2024-09-28 08:50:26.506582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:48.768 [2024-09-28 08:50:26.506751] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:48.768 [2024-09-28 08:50:26.506762] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:12:48.768 [2024-09-28 08:50:26.506929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.768 
"name": "raid_bdev1", 00:12:48.768 "uuid": "de227f8d-b4f4-4fe3-b468-fb37d50ef2f7", 00:12:48.768 "strip_size_kb": 0, 00:12:48.768 "state": "online", 00:12:48.768 "raid_level": "raid1", 00:12:48.768 "superblock": false, 00:12:48.768 "num_base_bdevs": 2, 00:12:48.768 "num_base_bdevs_discovered": 2, 00:12:48.768 "num_base_bdevs_operational": 2, 00:12:48.768 "base_bdevs_list": [ 00:12:48.768 { 00:12:48.768 "name": "BaseBdev1", 00:12:48.768 "uuid": "e7a9721d-e859-5d8f-849e-80426de92766", 00:12:48.768 "is_configured": true, 00:12:48.768 "data_offset": 0, 00:12:48.768 "data_size": 65536 00:12:48.768 }, 00:12:48.768 { 00:12:48.768 "name": "BaseBdev2", 00:12:48.768 "uuid": "e39c69c5-7c02-5138-99ff-d79b9a5aba87", 00:12:48.768 "is_configured": true, 00:12:48.768 "data_offset": 0, 00:12:48.768 "data_size": 65536 00:12:48.768 } 00:12:48.768 ] 00:12:48.768 }' 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.768 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.027 [2024-09-28 08:50:26.919675] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:49.027 [2024-09-28 08:50:26.991255] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:49.027 08:50:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.027 08:50:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.027 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.286 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.286 "name": "raid_bdev1", 00:12:49.286 "uuid": "de227f8d-b4f4-4fe3-b468-fb37d50ef2f7", 00:12:49.286 "strip_size_kb": 0, 00:12:49.286 "state": "online", 00:12:49.286 "raid_level": "raid1", 00:12:49.286 "superblock": false, 00:12:49.286 "num_base_bdevs": 2, 00:12:49.286 "num_base_bdevs_discovered": 1, 00:12:49.286 "num_base_bdevs_operational": 1, 00:12:49.286 "base_bdevs_list": [ 00:12:49.286 { 00:12:49.286 "name": null, 00:12:49.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.286 "is_configured": false, 00:12:49.286 "data_offset": 0, 00:12:49.286 "data_size": 65536 00:12:49.286 }, 00:12:49.286 { 00:12:49.286 "name": "BaseBdev2", 00:12:49.286 "uuid": "e39c69c5-7c02-5138-99ff-d79b9a5aba87", 00:12:49.286 "is_configured": true, 00:12:49.286 "data_offset": 0, 00:12:49.286 "data_size": 65536 00:12:49.286 } 00:12:49.286 ] 00:12:49.286 }' 00:12:49.286 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:49.286 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.286 [2024-09-28 08:50:27.092479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:49.286 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:49.286 Zero copy mechanism will not be used. 00:12:49.286 Running I/O for 60 seconds... 00:12:49.545 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:49.545 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.546 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.546 [2024-09-28 08:50:27.442534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:49.546 08:50:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.546 08:50:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:49.546 [2024-09-28 08:50:27.504989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:49.546 [2024-09-28 08:50:27.507219] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:49.805 [2024-09-28 08:50:27.629980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:49.805 [2024-09-28 08:50:27.630828] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:50.064 [2024-09-28 08:50:27.852347] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:50.064 [2024-09-28 08:50:27.852844] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:50.323 145.00 IOPS, 435.00 MiB/s 
[2024-09-28 08:50:28.188007] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:50.323 [2024-09-28 08:50:28.188821] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:50.581 [2024-09-28 08:50:28.322175] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:50.581 [2024-09-28 08:50:28.322582] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:50.581 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.581 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.581 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.581 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.581 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.581 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.581 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.581 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.581 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.581 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.581 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.581 "name": "raid_bdev1", 00:12:50.581 "uuid": "de227f8d-b4f4-4fe3-b468-fb37d50ef2f7", 00:12:50.581 "strip_size_kb": 0, 00:12:50.581 
"state": "online", 00:12:50.581 "raid_level": "raid1", 00:12:50.581 "superblock": false, 00:12:50.581 "num_base_bdevs": 2, 00:12:50.581 "num_base_bdevs_discovered": 2, 00:12:50.581 "num_base_bdevs_operational": 2, 00:12:50.581 "process": { 00:12:50.581 "type": "rebuild", 00:12:50.581 "target": "spare", 00:12:50.581 "progress": { 00:12:50.581 "blocks": 10240, 00:12:50.581 "percent": 15 00:12:50.581 } 00:12:50.581 }, 00:12:50.581 "base_bdevs_list": [ 00:12:50.581 { 00:12:50.581 "name": "spare", 00:12:50.581 "uuid": "50c587a6-a32d-5643-9577-085aa58b1989", 00:12:50.581 "is_configured": true, 00:12:50.581 "data_offset": 0, 00:12:50.581 "data_size": 65536 00:12:50.582 }, 00:12:50.582 { 00:12:50.582 "name": "BaseBdev2", 00:12:50.582 "uuid": "e39c69c5-7c02-5138-99ff-d79b9a5aba87", 00:12:50.582 "is_configured": true, 00:12:50.582 "data_offset": 0, 00:12:50.582 "data_size": 65536 00:12:50.582 } 00:12:50.582 ] 00:12:50.582 }' 00:12:50.582 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.841 [2024-09-28 08:50:28.631673] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:50.841 [2024-09-28 08:50:28.683487] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:50.841 [2024-09-28 
08:50:28.694084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.841 [2024-09-28 08:50:28.694191] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:50.841 [2024-09-28 08:50:28.694211] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:50.841 [2024-09-28 08:50:28.746121] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.841 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.842 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.842 "name": "raid_bdev1", 00:12:50.842 "uuid": "de227f8d-b4f4-4fe3-b468-fb37d50ef2f7", 00:12:50.842 "strip_size_kb": 0, 00:12:50.842 "state": "online", 00:12:50.842 "raid_level": "raid1", 00:12:50.842 "superblock": false, 00:12:50.842 "num_base_bdevs": 2, 00:12:50.842 "num_base_bdevs_discovered": 1, 00:12:50.842 "num_base_bdevs_operational": 1, 00:12:50.842 "base_bdevs_list": [ 00:12:50.842 { 00:12:50.842 "name": null, 00:12:50.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.842 "is_configured": false, 00:12:50.842 "data_offset": 0, 00:12:50.842 "data_size": 65536 00:12:50.842 }, 00:12:50.842 { 00:12:50.842 "name": "BaseBdev2", 00:12:50.842 "uuid": "e39c69c5-7c02-5138-99ff-d79b9a5aba87", 00:12:50.842 "is_configured": true, 00:12:50.842 "data_offset": 0, 00:12:50.842 "data_size": 65536 00:12:50.842 } 00:12:50.842 ] 00:12:50.842 }' 00:12:50.842 08:50:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.842 08:50:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.410 139.00 IOPS, 417.00 MiB/s 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:51.410 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.410 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:51.410 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:51.410 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:51.410 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.410 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.410 08:50:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.410 08:50:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.410 08:50:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.410 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.410 "name": "raid_bdev1", 00:12:51.410 "uuid": "de227f8d-b4f4-4fe3-b468-fb37d50ef2f7", 00:12:51.410 "strip_size_kb": 0, 00:12:51.410 "state": "online", 00:12:51.410 "raid_level": "raid1", 00:12:51.410 "superblock": false, 00:12:51.410 "num_base_bdevs": 2, 00:12:51.410 "num_base_bdevs_discovered": 1, 00:12:51.410 "num_base_bdevs_operational": 1, 00:12:51.410 "base_bdevs_list": [ 00:12:51.410 { 00:12:51.410 "name": null, 00:12:51.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.410 "is_configured": false, 00:12:51.411 "data_offset": 0, 00:12:51.411 "data_size": 65536 00:12:51.411 }, 00:12:51.411 { 00:12:51.411 "name": "BaseBdev2", 00:12:51.411 "uuid": "e39c69c5-7c02-5138-99ff-d79b9a5aba87", 00:12:51.411 "is_configured": true, 00:12:51.411 "data_offset": 0, 00:12:51.411 "data_size": 65536 00:12:51.411 } 00:12:51.411 ] 00:12:51.411 }' 00:12:51.411 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.411 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:51.411 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.411 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:51.411 08:50:29 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:51.411 08:50:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.411 08:50:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.411 [2024-09-28 08:50:29.359806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:51.411 08:50:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.411 08:50:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:51.670 [2024-09-28 08:50:29.417274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:51.670 [2024-09-28 08:50:29.419446] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:51.670 [2024-09-28 08:50:29.532381] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:51.670 [2024-09-28 08:50:29.533274] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:51.929 [2024-09-28 08:50:29.735577] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:51.929 [2024-09-28 08:50:29.736071] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:52.188 [2024-09-28 08:50:30.060546] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:52.447 132.33 IOPS, 397.00 MiB/s [2024-09-28 08:50:30.278844] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:52.447 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:12:52.447 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.447 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.447 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.447 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.447 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.447 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.447 08:50:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.447 08:50:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.447 08:50:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.707 "name": "raid_bdev1", 00:12:52.707 "uuid": "de227f8d-b4f4-4fe3-b468-fb37d50ef2f7", 00:12:52.707 "strip_size_kb": 0, 00:12:52.707 "state": "online", 00:12:52.707 "raid_level": "raid1", 00:12:52.707 "superblock": false, 00:12:52.707 "num_base_bdevs": 2, 00:12:52.707 "num_base_bdevs_discovered": 2, 00:12:52.707 "num_base_bdevs_operational": 2, 00:12:52.707 "process": { 00:12:52.707 "type": "rebuild", 00:12:52.707 "target": "spare", 00:12:52.707 "progress": { 00:12:52.707 "blocks": 10240, 00:12:52.707 "percent": 15 00:12:52.707 } 00:12:52.707 }, 00:12:52.707 "base_bdevs_list": [ 00:12:52.707 { 00:12:52.707 "name": "spare", 00:12:52.707 "uuid": "50c587a6-a32d-5643-9577-085aa58b1989", 00:12:52.707 "is_configured": true, 00:12:52.707 "data_offset": 0, 00:12:52.707 "data_size": 65536 00:12:52.707 }, 00:12:52.707 { 00:12:52.707 "name": "BaseBdev2", 00:12:52.707 "uuid": 
"e39c69c5-7c02-5138-99ff-d79b9a5aba87", 00:12:52.707 "is_configured": true, 00:12:52.707 "data_offset": 0, 00:12:52.707 "data_size": 65536 00:12:52.707 } 00:12:52.707 ] 00:12:52.707 }' 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=410 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.707 08:50:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.707 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.707 "name": "raid_bdev1", 00:12:52.707 "uuid": "de227f8d-b4f4-4fe3-b468-fb37d50ef2f7", 00:12:52.707 "strip_size_kb": 0, 00:12:52.707 "state": "online", 00:12:52.707 "raid_level": "raid1", 00:12:52.707 "superblock": false, 00:12:52.707 "num_base_bdevs": 2, 00:12:52.707 "num_base_bdevs_discovered": 2, 00:12:52.707 "num_base_bdevs_operational": 2, 00:12:52.707 "process": { 00:12:52.707 "type": "rebuild", 00:12:52.707 "target": "spare", 00:12:52.707 "progress": { 00:12:52.707 "blocks": 12288, 00:12:52.707 "percent": 18 00:12:52.707 } 00:12:52.707 }, 00:12:52.707 "base_bdevs_list": [ 00:12:52.707 { 00:12:52.707 "name": "spare", 00:12:52.707 "uuid": "50c587a6-a32d-5643-9577-085aa58b1989", 00:12:52.707 "is_configured": true, 00:12:52.707 "data_offset": 0, 00:12:52.707 "data_size": 65536 00:12:52.708 }, 00:12:52.708 { 00:12:52.708 "name": "BaseBdev2", 00:12:52.708 "uuid": "e39c69c5-7c02-5138-99ff-d79b9a5aba87", 00:12:52.708 "is_configured": true, 00:12:52.708 "data_offset": 0, 00:12:52.708 "data_size": 65536 00:12:52.708 } 00:12:52.708 ] 00:12:52.708 }' 00:12:52.708 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.708 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.708 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.708 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- 
# [[ spare == \s\p\a\r\e ]] 00:12:52.708 08:50:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:52.966 [2024-09-28 08:50:30.742466] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:53.225 [2024-09-28 08:50:30.968010] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:53.484 119.75 IOPS, 359.25 MiB/s [2024-09-28 08:50:31.328279] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:53.484 [2024-09-28 08:50:31.329124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:53.744 08:50:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:53.744 08:50:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.744 08:50:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.744 08:50:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.744 08:50:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.744 08:50:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.744 08:50:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.744 08:50:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.744 08:50:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.744 08:50:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.744 08:50:31 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.003 08:50:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.003 "name": "raid_bdev1", 00:12:54.003 "uuid": "de227f8d-b4f4-4fe3-b468-fb37d50ef2f7", 00:12:54.003 "strip_size_kb": 0, 00:12:54.003 "state": "online", 00:12:54.003 "raid_level": "raid1", 00:12:54.003 "superblock": false, 00:12:54.003 "num_base_bdevs": 2, 00:12:54.003 "num_base_bdevs_discovered": 2, 00:12:54.003 "num_base_bdevs_operational": 2, 00:12:54.003 "process": { 00:12:54.003 "type": "rebuild", 00:12:54.003 "target": "spare", 00:12:54.003 "progress": { 00:12:54.003 "blocks": 30720, 00:12:54.003 "percent": 46 00:12:54.003 } 00:12:54.003 }, 00:12:54.003 "base_bdevs_list": [ 00:12:54.003 { 00:12:54.003 "name": "spare", 00:12:54.003 "uuid": "50c587a6-a32d-5643-9577-085aa58b1989", 00:12:54.003 "is_configured": true, 00:12:54.003 "data_offset": 0, 00:12:54.003 "data_size": 65536 00:12:54.003 }, 00:12:54.003 { 00:12:54.003 "name": "BaseBdev2", 00:12:54.003 "uuid": "e39c69c5-7c02-5138-99ff-d79b9a5aba87", 00:12:54.003 "is_configured": true, 00:12:54.003 "data_offset": 0, 00:12:54.003 "data_size": 65536 00:12:54.003 } 00:12:54.003 ] 00:12:54.003 }' 00:12:54.003 08:50:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.003 08:50:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.003 08:50:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.003 08:50:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.003 08:50:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:54.263 104.20 IOPS, 312.60 MiB/s [2024-09-28 08:50:32.125108] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:55.201 08:50:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.201 08:50:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.201 08:50:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.201 08:50:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.201 08:50:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.201 08:50:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.201 08:50:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.201 08:50:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.201 08:50:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.201 08:50:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.201 08:50:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.201 08:50:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.201 "name": "raid_bdev1", 00:12:55.201 "uuid": "de227f8d-b4f4-4fe3-b468-fb37d50ef2f7", 00:12:55.201 "strip_size_kb": 0, 00:12:55.201 "state": "online", 00:12:55.201 "raid_level": "raid1", 00:12:55.201 "superblock": false, 00:12:55.201 "num_base_bdevs": 2, 00:12:55.201 "num_base_bdevs_discovered": 2, 00:12:55.201 "num_base_bdevs_operational": 2, 00:12:55.201 "process": { 00:12:55.201 "type": "rebuild", 00:12:55.201 "target": "spare", 00:12:55.201 "progress": { 00:12:55.201 "blocks": 51200, 00:12:55.201 "percent": 78 00:12:55.201 } 00:12:55.201 }, 00:12:55.201 "base_bdevs_list": [ 00:12:55.201 { 00:12:55.201 "name": "spare", 00:12:55.201 "uuid": "50c587a6-a32d-5643-9577-085aa58b1989", 
00:12:55.201 "is_configured": true, 00:12:55.201 "data_offset": 0, 00:12:55.201 "data_size": 65536 00:12:55.201 }, 00:12:55.201 { 00:12:55.201 "name": "BaseBdev2", 00:12:55.201 "uuid": "e39c69c5-7c02-5138-99ff-d79b9a5aba87", 00:12:55.201 "is_configured": true, 00:12:55.201 "data_offset": 0, 00:12:55.201 "data_size": 65536 00:12:55.201 } 00:12:55.201 ] 00:12:55.201 }' 00:12:55.201 08:50:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.201 [2024-09-28 08:50:32.924231] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:55.201 08:50:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.201 08:50:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.201 08:50:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.201 08:50:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:55.201 93.33 IOPS, 280.00 MiB/s [2024-09-28 08:50:33.129937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:55.201 [2024-09-28 08:50:33.130745] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:55.460 [2024-09-28 08:50:33.255236] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:55.720 [2024-09-28 08:50:33.695433] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:55.980 [2024-09-28 08:50:33.800320] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:55.980 [2024-09-28 08:50:33.804710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.240 
08:50:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:56.240 08:50:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.240 "name": "raid_bdev1", 00:12:56.240 "uuid": "de227f8d-b4f4-4fe3-b468-fb37d50ef2f7", 00:12:56.240 "strip_size_kb": 0, 00:12:56.240 "state": "online", 00:12:56.240 "raid_level": "raid1", 00:12:56.240 "superblock": false, 00:12:56.240 "num_base_bdevs": 2, 00:12:56.240 "num_base_bdevs_discovered": 2, 00:12:56.240 "num_base_bdevs_operational": 2, 00:12:56.240 "base_bdevs_list": [ 00:12:56.240 { 00:12:56.240 "name": "spare", 00:12:56.240 "uuid": "50c587a6-a32d-5643-9577-085aa58b1989", 00:12:56.240 "is_configured": true, 00:12:56.240 "data_offset": 0, 00:12:56.240 "data_size": 65536 00:12:56.240 }, 00:12:56.240 { 00:12:56.240 "name": "BaseBdev2", 00:12:56.240 "uuid": 
"e39c69c5-7c02-5138-99ff-d79b9a5aba87", 00:12:56.240 "is_configured": true, 00:12:56.240 "data_offset": 0, 00:12:56.240 "data_size": 65536 00:12:56.240 } 00:12:56.240 ] 00:12:56.240 }' 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.240 84.86 IOPS, 254.57 MiB/s 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.240 "name": "raid_bdev1", 
00:12:56.240 "uuid": "de227f8d-b4f4-4fe3-b468-fb37d50ef2f7", 00:12:56.240 "strip_size_kb": 0, 00:12:56.240 "state": "online", 00:12:56.240 "raid_level": "raid1", 00:12:56.240 "superblock": false, 00:12:56.240 "num_base_bdevs": 2, 00:12:56.240 "num_base_bdevs_discovered": 2, 00:12:56.240 "num_base_bdevs_operational": 2, 00:12:56.240 "base_bdevs_list": [ 00:12:56.240 { 00:12:56.240 "name": "spare", 00:12:56.240 "uuid": "50c587a6-a32d-5643-9577-085aa58b1989", 00:12:56.240 "is_configured": true, 00:12:56.240 "data_offset": 0, 00:12:56.240 "data_size": 65536 00:12:56.240 }, 00:12:56.240 { 00:12:56.240 "name": "BaseBdev2", 00:12:56.240 "uuid": "e39c69c5-7c02-5138-99ff-d79b9a5aba87", 00:12:56.240 "is_configured": true, 00:12:56.240 "data_offset": 0, 00:12:56.240 "data_size": 65536 00:12:56.240 } 00:12:56.240 ] 00:12:56.240 }' 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.240 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:56.623 08:50:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.623 "name": "raid_bdev1", 00:12:56.623 "uuid": "de227f8d-b4f4-4fe3-b468-fb37d50ef2f7", 00:12:56.623 "strip_size_kb": 0, 00:12:56.623 "state": "online", 00:12:56.623 "raid_level": "raid1", 00:12:56.623 "superblock": false, 00:12:56.623 "num_base_bdevs": 2, 00:12:56.623 "num_base_bdevs_discovered": 2, 00:12:56.623 "num_base_bdevs_operational": 2, 00:12:56.623 "base_bdevs_list": [ 00:12:56.623 { 00:12:56.623 "name": "spare", 00:12:56.623 "uuid": "50c587a6-a32d-5643-9577-085aa58b1989", 00:12:56.623 "is_configured": true, 00:12:56.623 "data_offset": 0, 00:12:56.623 "data_size": 65536 00:12:56.623 }, 00:12:56.623 { 00:12:56.623 "name": "BaseBdev2", 00:12:56.623 "uuid": "e39c69c5-7c02-5138-99ff-d79b9a5aba87", 00:12:56.623 "is_configured": true, 00:12:56.623 "data_offset": 0, 00:12:56.623 "data_size": 65536 00:12:56.623 } 00:12:56.623 ] 00:12:56.623 }' 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:56.623 08:50:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.899 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:56.899 08:50:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.899 08:50:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.899 [2024-09-28 08:50:34.724377] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:56.899 [2024-09-28 08:50:34.724461] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.899 00:12:56.899 Latency(us) 00:12:56.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.899 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:56.899 raid_bdev1 : 7.70 80.26 240.77 0.00 0.00 17307.33 304.07 110810.21 00:12:56.899 =================================================================================================================== 00:12:56.899 Total : 80.26 240.77 0.00 0.00 17307.33 304.07 110810.21 00:12:56.899 [2024-09-28 08:50:34.800817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.899 [2024-09-28 08:50:34.800908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.900 [2024-09-28 08:50:34.801009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.900 [2024-09-28 08:50:34.801074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:56.900 { 00:12:56.900 "results": [ 00:12:56.900 { 00:12:56.900 "job": "raid_bdev1", 00:12:56.900 "core_mask": "0x1", 00:12:56.900 "workload": "randrw", 00:12:56.900 "percentage": 50, 00:12:56.900 "status": "finished", 00:12:56.900 "queue_depth": 2, 00:12:56.900 
"io_size": 3145728, 00:12:56.900 "runtime": 7.700244, 00:12:56.900 "iops": 80.25719704466508, 00:12:56.900 "mibps": 240.77159113399523, 00:12:56.900 "io_failed": 0, 00:12:56.900 "io_timeout": 0, 00:12:56.900 "avg_latency_us": 17307.326419920577, 00:12:56.900 "min_latency_us": 304.0698689956332, 00:12:56.900 "max_latency_us": 110810.21484716157 00:12:56.900 } 00:12:56.900 ], 00:12:56.900 "core_count": 1 00:12:56.900 } 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:56.900 08:50:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:57.160 /dev/nbd0 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.160 1+0 records in 00:12:57.160 1+0 records out 00:12:57.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398965 s, 10.3 MB/s 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:57.160 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 
00:12:57.420 /dev/nbd1 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.420 1+0 records in 00:12:57.420 1+0 records out 00:12:57.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552875 s, 7.4 MB/s 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 
00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:57.420 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:57.680 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:57.680 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.680 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:57.680 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:57.680 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:57.680 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.680 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:57.940 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:57.940 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:57.940 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:57.940 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.940 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.940 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:57.940 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:57.940 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.940 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 
-- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:57.940 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.940 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:57.940 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:57.940 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:57.940 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.940 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:58.199 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.200 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.200 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.200 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.200 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.200 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.200 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:58.200 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.200 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:58.200 08:50:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76454 00:12:58.200 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 76454 ']' 00:12:58.200 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 76454 00:12:58.200 08:50:35 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:58.200 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:58.200 08:50:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76454 00:12:58.200 killing process with pid 76454 00:12:58.200 Received shutdown signal, test time was about 8.925258 seconds 00:12:58.200 00:12:58.200 Latency(us) 00:12:58.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.200 =================================================================================================================== 00:12:58.200 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:58.200 08:50:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:58.200 08:50:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:58.200 08:50:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76454' 00:12:58.200 08:50:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 76454 00:12:58.200 [2024-09-28 08:50:36.002820] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:58.200 08:50:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 76454 00:12:58.460 [2024-09-28 08:50:36.245771] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.841 ************************************ 00:12:59.841 END TEST raid_rebuild_test_io 00:12:59.841 ************************************ 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:59.841 00:12:59.841 real 0m12.274s 00:12:59.841 user 0m15.224s 00:12:59.841 sys 0m1.562s 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:12:59.841 08:50:37 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:59.841 08:50:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:59.841 08:50:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.841 08:50:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.841 ************************************ 00:12:59.841 START TEST raid_rebuild_test_sb_io 00:12:59.841 ************************************ 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:59.841 08:50:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76830 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76830 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 76830 ']' 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.841 08:50:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.841 [2024-09-28 08:50:37.814967] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:59.841 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:59.841 Zero copy mechanism will not be used. 00:12:59.841 [2024-09-28 08:50:37.815212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76830 ] 00:13:00.101 [2024-09-28 08:50:37.984346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.360 [2024-09-28 08:50:38.245000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.620 [2024-09-28 08:50:38.471543] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.620 [2024-09-28 08:50:38.471679] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev1_malloc 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.880 BaseBdev1_malloc 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.880 [2024-09-28 08:50:38.683696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:00.880 [2024-09-28 08:50:38.683766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.880 [2024-09-28 08:50:38.683793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:00.880 [2024-09-28 08:50:38.683808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.880 [2024-09-28 08:50:38.686224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.880 [2024-09-28 08:50:38.686302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:00.880 BaseBdev1 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.880 08:50:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.880 BaseBdev2_malloc 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:00.880 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.881 [2024-09-28 08:50:38.753857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:00.881 [2024-09-28 08:50:38.753974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.881 [2024-09-28 08:50:38.753999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:00.881 [2024-09-28 08:50:38.754011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.881 [2024-09-28 08:50:38.756383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.881 [2024-09-28 08:50:38.756423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:00.881 BaseBdev2 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.881 spare_malloc 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.881 spare_delay 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.881 [2024-09-28 08:50:38.822451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:00.881 [2024-09-28 08:50:38.822525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.881 [2024-09-28 08:50:38.822544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:00.881 [2024-09-28 08:50:38.822555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.881 [2024-09-28 08:50:38.824913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.881 [2024-09-28 08:50:38.824950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:00.881 spare 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:00.881 [2024-09-28 08:50:38.834481] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.881 [2024-09-28 08:50:38.836544] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:00.881 [2024-09-28 08:50:38.836795] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:00.881 [2024-09-28 08:50:38.836816] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:00.881 [2024-09-28 08:50:38.837075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:00.881 [2024-09-28 08:50:38.837258] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:00.881 [2024-09-28 08:50:38.837268] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:00.881 [2024-09-28 08:50:38.837432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.881 
08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.881 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.141 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.141 "name": "raid_bdev1", 00:13:01.141 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:01.141 "strip_size_kb": 0, 00:13:01.141 "state": "online", 00:13:01.141 "raid_level": "raid1", 00:13:01.141 "superblock": true, 00:13:01.141 "num_base_bdevs": 2, 00:13:01.141 "num_base_bdevs_discovered": 2, 00:13:01.141 "num_base_bdevs_operational": 2, 00:13:01.141 "base_bdevs_list": [ 00:13:01.141 { 00:13:01.141 "name": "BaseBdev1", 00:13:01.141 "uuid": "22a472fd-559e-549f-9c5c-1b44ee8e7a48", 00:13:01.141 "is_configured": true, 00:13:01.141 "data_offset": 2048, 00:13:01.141 "data_size": 63488 00:13:01.141 }, 00:13:01.141 { 00:13:01.141 "name": "BaseBdev2", 00:13:01.141 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:01.141 "is_configured": true, 00:13:01.141 "data_offset": 2048, 00:13:01.141 "data_size": 63488 00:13:01.141 } 00:13:01.141 ] 00:13:01.141 }' 00:13:01.141 08:50:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.141 08:50:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.401 [2024-09-28 08:50:39.282012] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.401 08:50:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:01.401 [2024-09-28 08:50:39.369517] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.401 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.660 08:50:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.660 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.660 "name": "raid_bdev1", 00:13:01.660 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:01.660 "strip_size_kb": 0, 00:13:01.660 "state": "online", 00:13:01.660 "raid_level": "raid1", 00:13:01.660 "superblock": true, 00:13:01.660 "num_base_bdevs": 2, 00:13:01.660 "num_base_bdevs_discovered": 1, 00:13:01.660 "num_base_bdevs_operational": 1, 00:13:01.660 "base_bdevs_list": [ 00:13:01.660 { 00:13:01.660 "name": null, 00:13:01.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.660 "is_configured": false, 00:13:01.660 "data_offset": 0, 00:13:01.660 "data_size": 63488 00:13:01.660 }, 00:13:01.660 { 00:13:01.660 "name": "BaseBdev2", 00:13:01.660 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:01.660 "is_configured": true, 00:13:01.660 "data_offset": 2048, 00:13:01.660 "data_size": 63488 00:13:01.660 } 00:13:01.660 ] 00:13:01.660 }' 00:13:01.660 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.660 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.660 [2024-09-28 08:50:39.470736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:01.660 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:01.660 Zero copy mechanism will not be used. 00:13:01.660 Running I/O for 60 seconds... 
00:13:01.920 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:01.920 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.920 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.920 [2024-09-28 08:50:39.783070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.920 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.920 08:50:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:01.920 [2024-09-28 08:50:39.834942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:01.920 [2024-09-28 08:50:39.837209] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.180 [2024-09-28 08:50:39.950716] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:02.180 [2024-09-28 08:50:39.951543] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:02.180 [2024-09-28 08:50:40.161421] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:02.180 [2024-09-28 08:50:40.161853] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:02.750 129.00 IOPS, 387.00 MiB/s [2024-09-28 08:50:40.495103] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:02.750 [2024-09-28 08:50:40.495928] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:02.750 [2024-09-28 08:50:40.710942] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:02.750 [2024-09-28 08:50:40.711358] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:03.010 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.011 "name": "raid_bdev1", 00:13:03.011 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:03.011 "strip_size_kb": 0, 00:13:03.011 "state": "online", 00:13:03.011 "raid_level": "raid1", 00:13:03.011 "superblock": true, 00:13:03.011 "num_base_bdevs": 2, 00:13:03.011 "num_base_bdevs_discovered": 2, 00:13:03.011 "num_base_bdevs_operational": 2, 00:13:03.011 "process": { 00:13:03.011 "type": "rebuild", 00:13:03.011 "target": "spare", 00:13:03.011 "progress": { 
00:13:03.011 "blocks": 10240, 00:13:03.011 "percent": 16 00:13:03.011 } 00:13:03.011 }, 00:13:03.011 "base_bdevs_list": [ 00:13:03.011 { 00:13:03.011 "name": "spare", 00:13:03.011 "uuid": "eb55242e-34f1-5e06-86ca-e4bd6c4d4b4c", 00:13:03.011 "is_configured": true, 00:13:03.011 "data_offset": 2048, 00:13:03.011 "data_size": 63488 00:13:03.011 }, 00:13:03.011 { 00:13:03.011 "name": "BaseBdev2", 00:13:03.011 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:03.011 "is_configured": true, 00:13:03.011 "data_offset": 2048, 00:13:03.011 "data_size": 63488 00:13:03.011 } 00:13:03.011 ] 00:13:03.011 }' 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.011 08:50:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.011 [2024-09-28 08:50:40.988588] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:03.271 [2024-09-28 08:50:41.151445] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:03.272 [2024-09-28 08:50:41.160304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.272 [2024-09-28 08:50:41.160346] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:03.272 [2024-09-28 08:50:41.160362] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed 
to remove target bdev: No such device 00:13:03.272 [2024-09-28 08:50:41.205354] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:03.272 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.272 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:03.272 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.272 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.272 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.272 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.272 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:03.272 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.272 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.272 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.272 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.272 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.272 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.272 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.272 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.272 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.532 08:50:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.532 "name": "raid_bdev1", 00:13:03.532 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:03.532 "strip_size_kb": 0, 00:13:03.532 "state": "online", 00:13:03.532 "raid_level": "raid1", 00:13:03.532 "superblock": true, 00:13:03.532 "num_base_bdevs": 2, 00:13:03.532 "num_base_bdevs_discovered": 1, 00:13:03.532 "num_base_bdevs_operational": 1, 00:13:03.532 "base_bdevs_list": [ 00:13:03.532 { 00:13:03.532 "name": null, 00:13:03.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.532 "is_configured": false, 00:13:03.532 "data_offset": 0, 00:13:03.532 "data_size": 63488 00:13:03.532 }, 00:13:03.532 { 00:13:03.532 "name": "BaseBdev2", 00:13:03.532 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:03.532 "is_configured": true, 00:13:03.532 "data_offset": 2048, 00:13:03.532 "data_size": 63488 00:13:03.532 } 00:13:03.532 ] 00:13:03.532 }' 00:13:03.532 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.532 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.792 130.50 IOPS, 391.50 MiB/s 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:03.792 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.792 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:03.792 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:03.792 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.792 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.792 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.792 08:50:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.792 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.792 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.792 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.792 "name": "raid_bdev1", 00:13:03.792 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:03.792 "strip_size_kb": 0, 00:13:03.792 "state": "online", 00:13:03.792 "raid_level": "raid1", 00:13:03.792 "superblock": true, 00:13:03.792 "num_base_bdevs": 2, 00:13:03.792 "num_base_bdevs_discovered": 1, 00:13:03.792 "num_base_bdevs_operational": 1, 00:13:03.792 "base_bdevs_list": [ 00:13:03.792 { 00:13:03.792 "name": null, 00:13:03.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.792 "is_configured": false, 00:13:03.792 "data_offset": 0, 00:13:03.792 "data_size": 63488 00:13:03.792 }, 00:13:03.792 { 00:13:03.792 "name": "BaseBdev2", 00:13:03.792 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:03.792 "is_configured": true, 00:13:03.792 "data_offset": 2048, 00:13:03.792 "data_size": 63488 00:13:03.792 } 00:13:03.792 ] 00:13:03.792 }' 00:13:03.792 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.793 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:03.793 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.793 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:03.793 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:03.793 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.793 
08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.793 [2024-09-28 08:50:41.781049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:04.053 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.053 08:50:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:04.053 [2024-09-28 08:50:41.826585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:04.053 [2024-09-28 08:50:41.828912] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:04.053 [2024-09-28 08:50:41.949229] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:04.053 [2024-09-28 08:50:41.950014] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:04.312 [2024-09-28 08:50:42.171501] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:04.312 [2024-09-28 08:50:42.172074] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:04.572 [2024-09-28 08:50:42.415573] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:04.831 141.67 IOPS, 425.00 MiB/s 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.831 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.831 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.831 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.831 08:50:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.090 "name": "raid_bdev1", 00:13:05.090 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:05.090 "strip_size_kb": 0, 00:13:05.090 "state": "online", 00:13:05.090 "raid_level": "raid1", 00:13:05.090 "superblock": true, 00:13:05.090 "num_base_bdevs": 2, 00:13:05.090 "num_base_bdevs_discovered": 2, 00:13:05.090 "num_base_bdevs_operational": 2, 00:13:05.090 "process": { 00:13:05.090 "type": "rebuild", 00:13:05.090 "target": "spare", 00:13:05.090 "progress": { 00:13:05.090 "blocks": 12288, 00:13:05.090 "percent": 19 00:13:05.090 } 00:13:05.090 }, 00:13:05.090 "base_bdevs_list": [ 00:13:05.090 { 00:13:05.090 "name": "spare", 00:13:05.090 "uuid": "eb55242e-34f1-5e06-86ca-e4bd6c4d4b4c", 00:13:05.090 "is_configured": true, 00:13:05.090 "data_offset": 2048, 00:13:05.090 "data_size": 63488 00:13:05.090 }, 00:13:05.090 { 00:13:05.090 "name": "BaseBdev2", 00:13:05.090 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:05.090 "is_configured": true, 00:13:05.090 "data_offset": 2048, 00:13:05.090 "data_size": 63488 00:13:05.090 } 00:13:05.090 ] 00:13:05.090 }' 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.090 08:50:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:05.090 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=422 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.090 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.091 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.091 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:05.091 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.091 08:50:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.091 [2024-09-28 08:50:42.996635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:05.091 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.091 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.091 "name": "raid_bdev1", 00:13:05.091 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:05.091 "strip_size_kb": 0, 00:13:05.091 "state": "online", 00:13:05.091 "raid_level": "raid1", 00:13:05.091 "superblock": true, 00:13:05.091 "num_base_bdevs": 2, 00:13:05.091 "num_base_bdevs_discovered": 2, 00:13:05.091 "num_base_bdevs_operational": 2, 00:13:05.091 "process": { 00:13:05.091 "type": "rebuild", 00:13:05.091 "target": "spare", 00:13:05.091 "progress": { 00:13:05.091 "blocks": 14336, 00:13:05.091 "percent": 22 00:13:05.091 } 00:13:05.091 }, 00:13:05.091 "base_bdevs_list": [ 00:13:05.091 { 00:13:05.091 "name": "spare", 00:13:05.091 "uuid": "eb55242e-34f1-5e06-86ca-e4bd6c4d4b4c", 00:13:05.091 "is_configured": true, 00:13:05.091 "data_offset": 2048, 00:13:05.091 "data_size": 63488 00:13:05.091 }, 00:13:05.091 { 00:13:05.091 "name": "BaseBdev2", 00:13:05.091 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:05.091 "is_configured": true, 00:13:05.091 "data_offset": 2048, 00:13:05.091 "data_size": 63488 00:13:05.091 } 00:13:05.091 ] 00:13:05.091 }' 00:13:05.091 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.091 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.091 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.350 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.350 08:50:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:05.610 [2024-09-28 08:50:43.365047] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:05.610 124.25 IOPS, 372.75 MiB/s [2024-09-28 08:50:43.480938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:05.610 [2024-09-28 08:50:43.481362] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:05.869 [2024-09-28 08:50:43.833276] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:06.129 [2024-09-28 08:50:43.946922] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:06.388 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:06.388 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.388 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.388 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.388 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.389 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.389 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.389 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.389 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.389 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.389 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.389 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.389 "name": "raid_bdev1", 00:13:06.389 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:06.389 "strip_size_kb": 0, 00:13:06.389 "state": "online", 00:13:06.389 "raid_level": "raid1", 00:13:06.389 "superblock": true, 00:13:06.389 "num_base_bdevs": 2, 00:13:06.389 "num_base_bdevs_discovered": 2, 00:13:06.389 "num_base_bdevs_operational": 2, 00:13:06.389 "process": { 00:13:06.389 "type": "rebuild", 00:13:06.389 "target": "spare", 00:13:06.389 "progress": { 00:13:06.389 "blocks": 30720, 00:13:06.389 "percent": 48 00:13:06.389 } 00:13:06.389 }, 00:13:06.389 "base_bdevs_list": [ 00:13:06.389 { 00:13:06.389 "name": "spare", 00:13:06.389 "uuid": "eb55242e-34f1-5e06-86ca-e4bd6c4d4b4c", 00:13:06.389 "is_configured": true, 00:13:06.389 "data_offset": 2048, 00:13:06.389 "data_size": 63488 00:13:06.389 }, 00:13:06.389 { 00:13:06.389 "name": "BaseBdev2", 00:13:06.389 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:06.389 "is_configured": true, 00:13:06.389 "data_offset": 2048, 00:13:06.389 "data_size": 63488 00:13:06.389 } 00:13:06.389 ] 00:13:06.389 }' 00:13:06.389 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.389 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.389 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.389 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.389 08:50:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:06.908 107.00 IOPS, 321.00 MiB/s [2024-09-28 08:50:44.817408] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:07.168 [2024-09-28 08:50:45.145368] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.428 "name": "raid_bdev1", 00:13:07.428 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:07.428 "strip_size_kb": 0, 00:13:07.428 
"state": "online", 00:13:07.428 "raid_level": "raid1", 00:13:07.428 "superblock": true, 00:13:07.428 "num_base_bdevs": 2, 00:13:07.428 "num_base_bdevs_discovered": 2, 00:13:07.428 "num_base_bdevs_operational": 2, 00:13:07.428 "process": { 00:13:07.428 "type": "rebuild", 00:13:07.428 "target": "spare", 00:13:07.428 "progress": { 00:13:07.428 "blocks": 51200, 00:13:07.428 "percent": 80 00:13:07.428 } 00:13:07.428 }, 00:13:07.428 "base_bdevs_list": [ 00:13:07.428 { 00:13:07.428 "name": "spare", 00:13:07.428 "uuid": "eb55242e-34f1-5e06-86ca-e4bd6c4d4b4c", 00:13:07.428 "is_configured": true, 00:13:07.428 "data_offset": 2048, 00:13:07.428 "data_size": 63488 00:13:07.428 }, 00:13:07.428 { 00:13:07.428 "name": "BaseBdev2", 00:13:07.428 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:07.428 "is_configured": true, 00:13:07.428 "data_offset": 2048, 00:13:07.428 "data_size": 63488 00:13:07.428 } 00:13:07.428 ] 00:13:07.428 }' 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.428 [2024-09-28 08:50:45.352975] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.428 08:50:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:08.257 99.00 IOPS, 297.00 MiB/s [2024-09-28 08:50:46.001096] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:08.257 [2024-09-28 08:50:46.105936] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:08.257 [2024-09-28 08:50:46.110311] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.518 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.518 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.518 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.518 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.518 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.518 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.518 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.518 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.518 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.518 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.518 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.518 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.518 "name": "raid_bdev1", 00:13:08.518 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:08.518 "strip_size_kb": 0, 00:13:08.518 "state": "online", 00:13:08.518 "raid_level": "raid1", 00:13:08.518 "superblock": true, 00:13:08.518 "num_base_bdevs": 2, 00:13:08.518 "num_base_bdevs_discovered": 2, 00:13:08.518 "num_base_bdevs_operational": 2, 00:13:08.518 "base_bdevs_list": [ 00:13:08.518 { 00:13:08.518 "name": "spare", 00:13:08.518 "uuid": "eb55242e-34f1-5e06-86ca-e4bd6c4d4b4c", 00:13:08.518 "is_configured": true, 00:13:08.518 "data_offset": 2048, 
00:13:08.518 "data_size": 63488 00:13:08.518 }, 00:13:08.518 { 00:13:08.518 "name": "BaseBdev2", 00:13:08.518 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:08.518 "is_configured": true, 00:13:08.518 "data_offset": 2048, 00:13:08.518 "data_size": 63488 00:13:08.518 } 00:13:08.518 ] 00:13:08.518 }' 00:13:08.518 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.518 89.43 IOPS, 268.29 MiB/s 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:08.518 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.778 "name": "raid_bdev1", 00:13:08.778 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:08.778 "strip_size_kb": 0, 00:13:08.778 "state": "online", 00:13:08.778 "raid_level": "raid1", 00:13:08.778 "superblock": true, 00:13:08.778 "num_base_bdevs": 2, 00:13:08.778 "num_base_bdevs_discovered": 2, 00:13:08.778 "num_base_bdevs_operational": 2, 00:13:08.778 "base_bdevs_list": [ 00:13:08.778 { 00:13:08.778 "name": "spare", 00:13:08.778 "uuid": "eb55242e-34f1-5e06-86ca-e4bd6c4d4b4c", 00:13:08.778 "is_configured": true, 00:13:08.778 "data_offset": 2048, 00:13:08.778 "data_size": 63488 00:13:08.778 }, 00:13:08.778 { 00:13:08.778 "name": "BaseBdev2", 00:13:08.778 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:08.778 "is_configured": true, 00:13:08.778 "data_offset": 2048, 00:13:08.778 "data_size": 63488 00:13:08.778 } 00:13:08.778 ] 00:13:08.778 }' 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.778 08:50:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.778 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.778 "name": "raid_bdev1", 00:13:08.778 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:08.778 "strip_size_kb": 0, 00:13:08.778 "state": "online", 00:13:08.778 "raid_level": "raid1", 00:13:08.778 "superblock": true, 00:13:08.778 "num_base_bdevs": 2, 00:13:08.778 "num_base_bdevs_discovered": 2, 00:13:08.778 "num_base_bdevs_operational": 2, 00:13:08.778 "base_bdevs_list": [ 00:13:08.778 { 00:13:08.778 "name": "spare", 00:13:08.779 "uuid": "eb55242e-34f1-5e06-86ca-e4bd6c4d4b4c", 00:13:08.779 "is_configured": true, 00:13:08.779 "data_offset": 2048, 00:13:08.779 "data_size": 63488 00:13:08.779 }, 00:13:08.779 { 00:13:08.779 "name": "BaseBdev2", 00:13:08.779 "uuid": 
"7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:08.779 "is_configured": true, 00:13:08.779 "data_offset": 2048, 00:13:08.779 "data_size": 63488 00:13:08.779 } 00:13:08.779 ] 00:13:08.779 }' 00:13:08.779 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.779 08:50:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.347 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:09.347 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.347 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.347 [2024-09-28 08:50:47.090853] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:09.347 [2024-09-28 08:50:47.090888] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.347 00:13:09.347 Latency(us) 00:13:09.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.347 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:09.347 raid_bdev1 : 7.73 84.58 253.74 0.00 0.00 16703.27 307.65 115389.15 00:13:09.347 =================================================================================================================== 00:13:09.347 Total : 84.58 253.74 0.00 0.00 16703.27 307.65 115389.15 00:13:09.347 [2024-09-28 08:50:47.211187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.347 [2024-09-28 08:50:47.211293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.347 [2024-09-28 08:50:47.211387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.347 [2024-09-28 08:50:47.211398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:13:09.347 { 00:13:09.347 "results": [ 00:13:09.347 { 00:13:09.347 "job": "raid_bdev1", 00:13:09.347 "core_mask": "0x1", 00:13:09.347 "workload": "randrw", 00:13:09.347 "percentage": 50, 00:13:09.347 "status": "finished", 00:13:09.347 "queue_depth": 2, 00:13:09.347 "io_size": 3145728, 00:13:09.348 "runtime": 7.732357, 00:13:09.348 "iops": 84.57964369725816, 00:13:09.348 "mibps": 253.73893109177448, 00:13:09.348 "io_failed": 0, 00:13:09.348 "io_timeout": 0, 00:13:09.348 "avg_latency_us": 16703.26789257909, 00:13:09.348 "min_latency_us": 307.6471615720524, 00:13:09.348 "max_latency_us": 115389.14934497817 00:13:09.348 } 00:13:09.348 ], 00:13:09.348 "core_count": 1 00:13:09.348 } 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('spare') 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.348 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:09.608 /dev/nbd0 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.608 1+0 records in 00:13:09.608 1+0 records out 00:13:09.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476053 s, 8.6 MB/s 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.608 08:50:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.608 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:09.868 /dev/nbd1 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.868 1+0 records in 00:13:09.868 1+0 records out 00:13:09.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582408 s, 7.0 MB/s 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.868 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:10.127 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:10.127 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.127 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:10.127 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.127 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:10.127 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.127 08:50:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:10.386 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:10.386 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:10.386 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:13:10.386 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:10.387 
08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.387 [2024-09-28 08:50:48.346517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:10.387 [2024-09-28 08:50:48.346579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.387 [2024-09-28 08:50:48.346622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:10.387 [2024-09-28 08:50:48.346631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.387 [2024-09-28 08:50:48.349184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.387 [2024-09-28 08:50:48.349224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:10.387 [2024-09-28 08:50:48.349325] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:10.387 [2024-09-28 08:50:48.349393] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:10.387 [2024-09-28 08:50:48.349561] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:10.387 spare 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.387 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.647 [2024-09-28 08:50:48.449465] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:10.647 [2024-09-28 08:50:48.449493] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:10.647 [2024-09-28 08:50:48.449804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:10.647 [2024-09-28 08:50:48.449996] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:10.647 [2024-09-28 08:50:48.450005] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:10.647 [2024-09-28 08:50:48.450193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.647 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.647 "name": "raid_bdev1", 00:13:10.647 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:10.647 "strip_size_kb": 0, 00:13:10.647 "state": "online", 00:13:10.647 "raid_level": "raid1", 00:13:10.647 "superblock": true, 00:13:10.647 "num_base_bdevs": 2, 00:13:10.647 "num_base_bdevs_discovered": 2, 00:13:10.647 "num_base_bdevs_operational": 2, 00:13:10.647 "base_bdevs_list": [ 00:13:10.648 { 00:13:10.648 "name": "spare", 00:13:10.648 "uuid": "eb55242e-34f1-5e06-86ca-e4bd6c4d4b4c", 00:13:10.648 "is_configured": true, 00:13:10.648 "data_offset": 2048, 00:13:10.648 "data_size": 63488 00:13:10.648 }, 00:13:10.648 { 00:13:10.648 "name": 
"BaseBdev2", 00:13:10.648 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:10.648 "is_configured": true, 00:13:10.648 "data_offset": 2048, 00:13:10.648 "data_size": 63488 00:13:10.648 } 00:13:10.648 ] 00:13:10.648 }' 00:13:10.648 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.648 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.907 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:10.907 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.907 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:10.907 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:10.907 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.907 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.907 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.907 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.907 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.907 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.166 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.166 "name": "raid_bdev1", 00:13:11.166 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:11.166 "strip_size_kb": 0, 00:13:11.166 "state": "online", 00:13:11.166 "raid_level": "raid1", 00:13:11.166 "superblock": true, 00:13:11.166 "num_base_bdevs": 2, 00:13:11.166 "num_base_bdevs_discovered": 2, 00:13:11.166 
"num_base_bdevs_operational": 2, 00:13:11.166 "base_bdevs_list": [ 00:13:11.166 { 00:13:11.166 "name": "spare", 00:13:11.166 "uuid": "eb55242e-34f1-5e06-86ca-e4bd6c4d4b4c", 00:13:11.166 "is_configured": true, 00:13:11.166 "data_offset": 2048, 00:13:11.166 "data_size": 63488 00:13:11.166 }, 00:13:11.166 { 00:13:11.166 "name": "BaseBdev2", 00:13:11.166 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:11.166 "is_configured": true, 00:13:11.166 "data_offset": 2048, 00:13:11.166 "data_size": 63488 00:13:11.166 } 00:13:11.166 ] 00:13:11.166 }' 00:13:11.166 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.166 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.167 08:50:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:13:11.167 [2024-09-28 08:50:49.053369] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:13:11.167 "name": "raid_bdev1", 00:13:11.167 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:11.167 "strip_size_kb": 0, 00:13:11.167 "state": "online", 00:13:11.167 "raid_level": "raid1", 00:13:11.167 "superblock": true, 00:13:11.167 "num_base_bdevs": 2, 00:13:11.167 "num_base_bdevs_discovered": 1, 00:13:11.167 "num_base_bdevs_operational": 1, 00:13:11.167 "base_bdevs_list": [ 00:13:11.167 { 00:13:11.167 "name": null, 00:13:11.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.167 "is_configured": false, 00:13:11.167 "data_offset": 0, 00:13:11.167 "data_size": 63488 00:13:11.167 }, 00:13:11.167 { 00:13:11.167 "name": "BaseBdev2", 00:13:11.167 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:11.167 "is_configured": true, 00:13:11.167 "data_offset": 2048, 00:13:11.167 "data_size": 63488 00:13:11.167 } 00:13:11.167 ] 00:13:11.167 }' 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.167 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.735 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:11.735 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.736 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.736 [2024-09-28 08:50:49.540633] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.736 [2024-09-28 08:50:49.540855] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:11.736 [2024-09-28 08:50:49.540879] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:11.736 [2024-09-28 08:50:49.541319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.736 [2024-09-28 08:50:49.558112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:11.736 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.736 08:50:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:11.736 [2024-09-28 08:50:49.560311] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:12.673 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.673 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.673 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.673 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.673 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.673 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.673 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.673 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.674 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.674 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.674 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.674 "name": "raid_bdev1", 00:13:12.674 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:12.674 "strip_size_kb": 0, 00:13:12.674 "state": "online", 
00:13:12.674 "raid_level": "raid1", 00:13:12.674 "superblock": true, 00:13:12.674 "num_base_bdevs": 2, 00:13:12.674 "num_base_bdevs_discovered": 2, 00:13:12.674 "num_base_bdevs_operational": 2, 00:13:12.674 "process": { 00:13:12.674 "type": "rebuild", 00:13:12.674 "target": "spare", 00:13:12.674 "progress": { 00:13:12.674 "blocks": 20480, 00:13:12.674 "percent": 32 00:13:12.674 } 00:13:12.674 }, 00:13:12.674 "base_bdevs_list": [ 00:13:12.674 { 00:13:12.674 "name": "spare", 00:13:12.674 "uuid": "eb55242e-34f1-5e06-86ca-e4bd6c4d4b4c", 00:13:12.674 "is_configured": true, 00:13:12.674 "data_offset": 2048, 00:13:12.674 "data_size": 63488 00:13:12.674 }, 00:13:12.674 { 00:13:12.674 "name": "BaseBdev2", 00:13:12.674 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:12.674 "is_configured": true, 00:13:12.674 "data_offset": 2048, 00:13:12.674 "data_size": 63488 00:13:12.674 } 00:13:12.674 ] 00:13:12.674 }' 00:13:12.674 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.932 [2024-09-28 08:50:50.703406] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:12.932 [2024-09-28 08:50:50.768811] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:12.932 [2024-09-28 
08:50:50.769226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.932 [2024-09-28 08:50:50.769247] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:12.932 [2024-09-28 08:50:50.769258] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.932 "name": "raid_bdev1", 00:13:12.932 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:12.932 "strip_size_kb": 0, 00:13:12.932 "state": "online", 00:13:12.932 "raid_level": "raid1", 00:13:12.932 "superblock": true, 00:13:12.932 "num_base_bdevs": 2, 00:13:12.932 "num_base_bdevs_discovered": 1, 00:13:12.932 "num_base_bdevs_operational": 1, 00:13:12.932 "base_bdevs_list": [ 00:13:12.932 { 00:13:12.932 "name": null, 00:13:12.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.932 "is_configured": false, 00:13:12.932 "data_offset": 0, 00:13:12.932 "data_size": 63488 00:13:12.932 }, 00:13:12.932 { 00:13:12.932 "name": "BaseBdev2", 00:13:12.932 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:12.932 "is_configured": true, 00:13:12.932 "data_offset": 2048, 00:13:12.932 "data_size": 63488 00:13:12.932 } 00:13:12.932 ] 00:13:12.932 }' 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.932 08:50:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.499 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:13.499 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.499 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.499 [2024-09-28 08:50:51.273204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:13.499 [2024-09-28 08:50:51.273489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.499 [2024-09-28 08:50:51.273591] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:13:13.499 [2024-09-28 08:50:51.273697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.499 [2024-09-28 08:50:51.274318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.499 [2024-09-28 08:50:51.274448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:13.499 [2024-09-28 08:50:51.274622] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:13.499 [2024-09-28 08:50:51.274680] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:13.499 [2024-09-28 08:50:51.274729] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:13.499 [2024-09-28 08:50:51.274849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:13.499 [2024-09-28 08:50:51.291611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:13.499 spare 00:13:13.500 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.500 08:50:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:13.500 [2024-09-28 08:50:51.293854] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:14.437 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.437 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.437 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.437 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.437 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.437 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.437 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.437 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.437 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.437 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.437 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.437 "name": "raid_bdev1", 00:13:14.437 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:14.437 "strip_size_kb": 0, 00:13:14.437 "state": "online", 00:13:14.437 "raid_level": "raid1", 00:13:14.437 "superblock": true, 00:13:14.437 "num_base_bdevs": 2, 00:13:14.437 "num_base_bdevs_discovered": 2, 00:13:14.437 "num_base_bdevs_operational": 2, 00:13:14.437 "process": { 00:13:14.437 "type": "rebuild", 00:13:14.437 "target": "spare", 00:13:14.437 "progress": { 00:13:14.437 "blocks": 20480, 00:13:14.437 "percent": 32 00:13:14.437 } 00:13:14.437 }, 00:13:14.437 "base_bdevs_list": [ 00:13:14.437 { 00:13:14.437 "name": "spare", 00:13:14.437 "uuid": "eb55242e-34f1-5e06-86ca-e4bd6c4d4b4c", 00:13:14.437 "is_configured": true, 00:13:14.437 "data_offset": 2048, 00:13:14.437 "data_size": 63488 00:13:14.437 }, 00:13:14.437 { 00:13:14.437 "name": "BaseBdev2", 00:13:14.437 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:14.437 "is_configured": true, 00:13:14.437 "data_offset": 2048, 00:13:14.437 "data_size": 63488 00:13:14.437 } 00:13:14.437 ] 00:13:14.437 }' 00:13:14.437 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.437 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:13:14.437 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.697 [2024-09-28 08:50:52.457074] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.697 [2024-09-28 08:50:52.502663] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:14.697 [2024-09-28 08:50:52.503233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.697 [2024-09-28 08:50:52.503268] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.697 [2024-09-28 08:50:52.503278] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.697 "name": "raid_bdev1", 00:13:14.697 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:14.697 "strip_size_kb": 0, 00:13:14.697 "state": "online", 00:13:14.697 "raid_level": "raid1", 00:13:14.697 "superblock": true, 00:13:14.697 "num_base_bdevs": 2, 00:13:14.697 "num_base_bdevs_discovered": 1, 00:13:14.697 "num_base_bdevs_operational": 1, 00:13:14.697 "base_bdevs_list": [ 00:13:14.697 { 00:13:14.697 "name": null, 00:13:14.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.697 "is_configured": false, 00:13:14.697 "data_offset": 0, 00:13:14.697 "data_size": 63488 00:13:14.697 }, 00:13:14.697 { 00:13:14.697 "name": "BaseBdev2", 00:13:14.697 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:14.697 "is_configured": true, 00:13:14.697 "data_offset": 2048, 00:13:14.697 "data_size": 63488 00:13:14.697 } 00:13:14.697 ] 00:13:14.697 }' 
00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.697 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.266 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:15.266 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.266 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:15.266 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:15.266 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.266 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.266 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.266 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.266 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.266 08:50:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.266 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.266 "name": "raid_bdev1", 00:13:15.266 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:15.266 "strip_size_kb": 0, 00:13:15.266 "state": "online", 00:13:15.266 "raid_level": "raid1", 00:13:15.266 "superblock": true, 00:13:15.266 "num_base_bdevs": 2, 00:13:15.266 "num_base_bdevs_discovered": 1, 00:13:15.266 "num_base_bdevs_operational": 1, 00:13:15.266 "base_bdevs_list": [ 00:13:15.266 { 00:13:15.266 "name": null, 00:13:15.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.266 "is_configured": false, 00:13:15.266 "data_offset": 0, 
00:13:15.266 "data_size": 63488 00:13:15.266 }, 00:13:15.266 { 00:13:15.266 "name": "BaseBdev2", 00:13:15.266 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:15.266 "is_configured": true, 00:13:15.266 "data_offset": 2048, 00:13:15.266 "data_size": 63488 00:13:15.266 } 00:13:15.266 ] 00:13:15.266 }' 00:13:15.266 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.267 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:15.267 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.267 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:15.267 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:15.267 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.267 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.267 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.267 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:15.267 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.267 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.267 [2024-09-28 08:50:53.100929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:15.267 [2024-09-28 08:50:53.100987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.267 [2024-09-28 08:50:53.101016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:15.267 [2024-09-28 08:50:53.101025] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.267 [2024-09-28 08:50:53.101545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.267 [2024-09-28 08:50:53.101574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:15.267 [2024-09-28 08:50:53.101674] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:15.267 [2024-09-28 08:50:53.101690] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:15.267 [2024-09-28 08:50:53.101700] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:15.267 [2024-09-28 08:50:53.101711] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:15.267 BaseBdev1 00:13:15.267 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.267 08:50:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:16.204 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:16.204 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.204 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.204 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.204 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.204 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:16.204 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.204 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.204 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.204 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.204 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.204 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.204 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.205 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.205 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.205 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.205 "name": "raid_bdev1", 00:13:16.205 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:16.205 "strip_size_kb": 0, 00:13:16.205 "state": "online", 00:13:16.205 "raid_level": "raid1", 00:13:16.205 "superblock": true, 00:13:16.205 "num_base_bdevs": 2, 00:13:16.205 "num_base_bdevs_discovered": 1, 00:13:16.205 "num_base_bdevs_operational": 1, 00:13:16.205 "base_bdevs_list": [ 00:13:16.205 { 00:13:16.205 "name": null, 00:13:16.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.205 "is_configured": false, 00:13:16.205 "data_offset": 0, 00:13:16.205 "data_size": 63488 00:13:16.205 }, 00:13:16.205 { 00:13:16.205 "name": "BaseBdev2", 00:13:16.205 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:16.205 "is_configured": true, 00:13:16.205 "data_offset": 2048, 00:13:16.205 "data_size": 63488 00:13:16.205 } 00:13:16.205 ] 00:13:16.205 }' 00:13:16.205 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.205 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.772 "name": "raid_bdev1", 00:13:16.772 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:16.772 "strip_size_kb": 0, 00:13:16.772 "state": "online", 00:13:16.772 "raid_level": "raid1", 00:13:16.772 "superblock": true, 00:13:16.772 "num_base_bdevs": 2, 00:13:16.772 "num_base_bdevs_discovered": 1, 00:13:16.772 "num_base_bdevs_operational": 1, 00:13:16.772 "base_bdevs_list": [ 00:13:16.772 { 00:13:16.772 "name": null, 00:13:16.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.772 "is_configured": false, 00:13:16.772 "data_offset": 0, 00:13:16.772 "data_size": 63488 00:13:16.772 }, 00:13:16.772 { 00:13:16.772 "name": "BaseBdev2", 00:13:16.772 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:16.772 "is_configured": true, 
00:13:16.772 "data_offset": 2048, 00:13:16.772 "data_size": 63488 00:13:16.772 } 00:13:16.772 ] 00:13:16.772 }' 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.772 [2024-09-28 08:50:54.750389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.772 [2024-09-28 08:50:54.750587] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:16.772 [2024-09-28 08:50:54.750611] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:16.772 request: 00:13:16.772 { 00:13:16.772 "base_bdev": "BaseBdev1", 00:13:16.772 "raid_bdev": "raid_bdev1", 00:13:16.772 "method": "bdev_raid_add_base_bdev", 00:13:16.772 "req_id": 1 00:13:16.772 } 00:13:16.772 Got JSON-RPC error response 00:13:16.772 response: 00:13:16.772 { 00:13:16.772 "code": -22, 00:13:16.772 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:16.772 } 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:16.772 08:50:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.152 "name": "raid_bdev1", 00:13:18.152 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:18.152 "strip_size_kb": 0, 00:13:18.152 "state": "online", 00:13:18.152 "raid_level": "raid1", 00:13:18.152 "superblock": true, 00:13:18.152 "num_base_bdevs": 2, 00:13:18.152 "num_base_bdevs_discovered": 1, 00:13:18.152 "num_base_bdevs_operational": 1, 00:13:18.152 "base_bdevs_list": [ 00:13:18.152 { 00:13:18.152 "name": null, 00:13:18.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.152 "is_configured": false, 00:13:18.152 "data_offset": 0, 00:13:18.152 "data_size": 63488 00:13:18.152 }, 00:13:18.152 { 00:13:18.152 "name": "BaseBdev2", 00:13:18.152 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:18.152 "is_configured": true, 00:13:18.152 "data_offset": 2048, 00:13:18.152 "data_size": 63488 00:13:18.152 } 00:13:18.152 ] 00:13:18.152 }' 
00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.152 08:50:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.412 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.412 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.412 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.412 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.412 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.412 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.412 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.412 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.412 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.412 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.412 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.412 "name": "raid_bdev1", 00:13:18.412 "uuid": "0da78b42-8102-4627-bffd-ffaf6480ccd9", 00:13:18.412 "strip_size_kb": 0, 00:13:18.412 "state": "online", 00:13:18.412 "raid_level": "raid1", 00:13:18.412 "superblock": true, 00:13:18.412 "num_base_bdevs": 2, 00:13:18.412 "num_base_bdevs_discovered": 1, 00:13:18.412 "num_base_bdevs_operational": 1, 00:13:18.412 "base_bdevs_list": [ 00:13:18.412 { 00:13:18.412 "name": null, 00:13:18.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.412 "is_configured": false, 00:13:18.412 "data_offset": 0, 
00:13:18.412 "data_size": 63488 00:13:18.412 }, 00:13:18.412 { 00:13:18.412 "name": "BaseBdev2", 00:13:18.412 "uuid": "7c781a40-cfb0-5150-86f7-1cce8d58c825", 00:13:18.412 "is_configured": true, 00:13:18.412 "data_offset": 2048, 00:13:18.412 "data_size": 63488 00:13:18.412 } 00:13:18.412 ] 00:13:18.412 }' 00:13:18.413 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.413 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.413 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.413 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.413 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76830 00:13:18.413 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 76830 ']' 00:13:18.413 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 76830 00:13:18.413 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:18.413 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:18.413 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76830 00:13:18.413 killing process with pid 76830 00:13:18.413 Received shutdown signal, test time was about 16.905580 seconds 00:13:18.413 00:13:18.413 Latency(us) 00:13:18.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.413 =================================================================================================================== 00:13:18.413 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:18.413 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:18.413 08:50:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:18.413 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76830' 00:13:18.413 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 76830 00:13:18.413 [2024-09-28 08:50:56.346132] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.413 08:50:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 76830 00:13:18.413 [2024-09-28 08:50:56.346286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.413 [2024-09-28 08:50:56.346352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.413 [2024-09-28 08:50:56.346364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:18.673 [2024-09-28 08:50:56.583297] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:20.081 08:50:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:20.081 00:13:20.081 real 0m20.275s 00:13:20.081 user 0m26.200s 00:13:20.081 sys 0m2.332s 00:13:20.081 08:50:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.081 08:50:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.081 ************************************ 00:13:20.081 END TEST raid_rebuild_test_sb_io 00:13:20.081 ************************************ 00:13:20.081 08:50:58 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:20.081 08:50:58 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:20.081 08:50:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:20.081 08:50:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:13:20.081 08:50:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:20.081 ************************************ 00:13:20.081 START TEST raid_rebuild_test 00:13:20.081 ************************************ 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:20.081 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77526 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77526 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 77526 ']' 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:20.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.367 08:50:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.367 [2024-09-28 08:50:58.162762] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:20.367 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:20.367 Zero copy mechanism will not be used. 00:13:20.367 [2024-09-28 08:50:58.163488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77526 ] 00:13:20.368 [2024-09-28 08:50:58.330122] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.646 [2024-09-28 08:50:58.572042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.905 [2024-09-28 08:50:58.797994] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.905 [2024-09-28 08:50:58.798031] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.165 08:50:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.165 08:50:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:21.165 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:21.165 08:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:21.165 08:50:58 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.165 08:50:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.165 BaseBdev1_malloc 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.165 [2024-09-28 08:50:59.031326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:21.165 [2024-09-28 08:50:59.031416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.165 [2024-09-28 08:50:59.031443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:21.165 [2024-09-28 08:50:59.031458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.165 [2024-09-28 08:50:59.033874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.165 [2024-09-28 08:50:59.033909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:21.165 BaseBdev1 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.165 BaseBdev2_malloc 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.165 [2024-09-28 08:50:59.120951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:21.165 [2024-09-28 08:50:59.121008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.165 [2024-09-28 08:50:59.121042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:21.165 [2024-09-28 08:50:59.121055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.165 [2024-09-28 08:50:59.123427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.165 [2024-09-28 08:50:59.123462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:21.165 BaseBdev2 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.165 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.425 BaseBdev3_malloc 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:21.425 08:50:59 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.425 [2024-09-28 08:50:59.183319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:21.425 [2024-09-28 08:50:59.183392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.425 [2024-09-28 08:50:59.183414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:21.425 [2024-09-28 08:50:59.183425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.425 [2024-09-28 08:50:59.185803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.425 [2024-09-28 08:50:59.185840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:21.425 BaseBdev3 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.425 BaseBdev4_malloc 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.425 [2024-09-28 08:50:59.244532] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:21.425 [2024-09-28 08:50:59.244593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.425 [2024-09-28 08:50:59.244628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:21.425 [2024-09-28 08:50:59.244639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.425 [2024-09-28 08:50:59.246936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.425 [2024-09-28 08:50:59.246988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:21.425 BaseBdev4 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.425 spare_malloc 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.425 spare_delay 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.425 [2024-09-28 08:50:59.317068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:21.425 [2024-09-28 08:50:59.317126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.425 [2024-09-28 08:50:59.317160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:21.425 [2024-09-28 08:50:59.317171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.425 [2024-09-28 08:50:59.319495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.425 [2024-09-28 08:50:59.319530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:21.425 spare 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.425 [2024-09-28 08:50:59.329108] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.425 [2024-09-28 08:50:59.331189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.425 [2024-09-28 08:50:59.331256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:21.425 [2024-09-28 08:50:59.331307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:21.425 [2024-09-28 08:50:59.331379] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:21.425 [2024-09-28 
08:50:59.331390] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:21.425 [2024-09-28 08:50:59.331633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:21.425 [2024-09-28 08:50:59.331831] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:21.425 [2024-09-28 08:50:59.331842] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:21.425 [2024-09-28 08:50:59.331992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.425 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.425 "name": "raid_bdev1", 00:13:21.425 "uuid": "738b808c-3df1-4f7c-af53-8e3460ab6512", 00:13:21.425 "strip_size_kb": 0, 00:13:21.425 "state": "online", 00:13:21.425 "raid_level": "raid1", 00:13:21.425 "superblock": false, 00:13:21.425 "num_base_bdevs": 4, 00:13:21.425 "num_base_bdevs_discovered": 4, 00:13:21.425 "num_base_bdevs_operational": 4, 00:13:21.425 "base_bdevs_list": [ 00:13:21.425 { 00:13:21.425 "name": "BaseBdev1", 00:13:21.425 "uuid": "6008ab35-08df-5622-8d20-2e1a066eb227", 00:13:21.425 "is_configured": true, 00:13:21.425 "data_offset": 0, 00:13:21.425 "data_size": 65536 00:13:21.425 }, 00:13:21.425 { 00:13:21.425 "name": "BaseBdev2", 00:13:21.425 "uuid": "7726fd9f-0ff1-5454-bc79-a61ebabe76a3", 00:13:21.425 "is_configured": true, 00:13:21.425 "data_offset": 0, 00:13:21.426 "data_size": 65536 00:13:21.426 }, 00:13:21.426 { 00:13:21.426 "name": "BaseBdev3", 00:13:21.426 "uuid": "15cc11ea-3cb6-5b13-9a08-5f8cf86dfe5f", 00:13:21.426 "is_configured": true, 00:13:21.426 "data_offset": 0, 00:13:21.426 "data_size": 65536 00:13:21.426 }, 00:13:21.426 { 00:13:21.426 "name": "BaseBdev4", 00:13:21.426 "uuid": "0b3f99af-ea8a-5ef8-9477-3083a944f05e", 00:13:21.426 "is_configured": true, 00:13:21.426 "data_offset": 0, 00:13:21.426 "data_size": 65536 00:13:21.426 } 00:13:21.426 ] 00:13:21.426 }' 00:13:21.426 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.426 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:21.994 [2024-09-28 08:50:59.780625] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:21.994 08:50:59 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:21.994 08:50:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:22.254 [2024-09-28 08:51:00.055917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:22.254 /dev/nbd0 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.254 1+0 records in 00:13:22.254 1+0 records out 00:13:22.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545544 s, 7.5 MB/s 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:22.254 08:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:27.532 65536+0 records in 00:13:27.532 65536+0 records out 00:13:27.532 33554432 bytes (34 MB, 32 MiB) copied, 5.33223 s, 6.3 MB/s 00:13:27.532 08:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:27.532 08:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:27.532 08:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:27.532 08:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:27.532 08:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 
00:13:27.533 08:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.533 08:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:27.793 [2024-09-28 08:51:05.648332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.793 [2024-09-28 08:51:05.684380] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.793 08:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.793 "name": "raid_bdev1", 00:13:27.793 "uuid": "738b808c-3df1-4f7c-af53-8e3460ab6512", 00:13:27.793 "strip_size_kb": 0, 00:13:27.793 "state": "online", 00:13:27.793 "raid_level": "raid1", 00:13:27.793 "superblock": false, 00:13:27.793 "num_base_bdevs": 4, 00:13:27.793 "num_base_bdevs_discovered": 3, 00:13:27.793 "num_base_bdevs_operational": 3, 00:13:27.793 "base_bdevs_list": [ 00:13:27.793 { 00:13:27.793 "name": null, 00:13:27.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.793 "is_configured": false, 00:13:27.793 "data_offset": 0, 00:13:27.793 "data_size": 
65536 00:13:27.793 }, 00:13:27.793 { 00:13:27.793 "name": "BaseBdev2", 00:13:27.793 "uuid": "7726fd9f-0ff1-5454-bc79-a61ebabe76a3", 00:13:27.793 "is_configured": true, 00:13:27.793 "data_offset": 0, 00:13:27.793 "data_size": 65536 00:13:27.793 }, 00:13:27.793 { 00:13:27.794 "name": "BaseBdev3", 00:13:27.794 "uuid": "15cc11ea-3cb6-5b13-9a08-5f8cf86dfe5f", 00:13:27.794 "is_configured": true, 00:13:27.794 "data_offset": 0, 00:13:27.794 "data_size": 65536 00:13:27.794 }, 00:13:27.794 { 00:13:27.794 "name": "BaseBdev4", 00:13:27.794 "uuid": "0b3f99af-ea8a-5ef8-9477-3083a944f05e", 00:13:27.794 "is_configured": true, 00:13:27.794 "data_offset": 0, 00:13:27.794 "data_size": 65536 00:13:27.794 } 00:13:27.794 ] 00:13:27.794 }' 00:13:27.794 08:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.794 08:51:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.361 08:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:28.361 08:51:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.361 08:51:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.361 [2024-09-28 08:51:06.175509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:28.361 [2024-09-28 08:51:06.190402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:28.361 08:51:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.361 08:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:28.361 [2024-09-28 08:51:06.192592] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:29.298 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.298 08:51:07 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.298 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.298 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.298 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.298 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.298 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.298 08:51:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.298 08:51:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.298 08:51:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.298 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.298 "name": "raid_bdev1", 00:13:29.298 "uuid": "738b808c-3df1-4f7c-af53-8e3460ab6512", 00:13:29.298 "strip_size_kb": 0, 00:13:29.298 "state": "online", 00:13:29.298 "raid_level": "raid1", 00:13:29.298 "superblock": false, 00:13:29.298 "num_base_bdevs": 4, 00:13:29.298 "num_base_bdevs_discovered": 4, 00:13:29.298 "num_base_bdevs_operational": 4, 00:13:29.298 "process": { 00:13:29.298 "type": "rebuild", 00:13:29.298 "target": "spare", 00:13:29.298 "progress": { 00:13:29.298 "blocks": 20480, 00:13:29.298 "percent": 31 00:13:29.298 } 00:13:29.298 }, 00:13:29.298 "base_bdevs_list": [ 00:13:29.298 { 00:13:29.298 "name": "spare", 00:13:29.298 "uuid": "cd753532-d4ee-5765-9556-65492a907787", 00:13:29.298 "is_configured": true, 00:13:29.298 "data_offset": 0, 00:13:29.298 "data_size": 65536 00:13:29.298 }, 00:13:29.298 { 00:13:29.298 "name": "BaseBdev2", 00:13:29.298 "uuid": "7726fd9f-0ff1-5454-bc79-a61ebabe76a3", 00:13:29.298 "is_configured": true, 00:13:29.298 "data_offset": 0, 
00:13:29.298 "data_size": 65536 00:13:29.298 }, 00:13:29.298 { 00:13:29.298 "name": "BaseBdev3", 00:13:29.298 "uuid": "15cc11ea-3cb6-5b13-9a08-5f8cf86dfe5f", 00:13:29.298 "is_configured": true, 00:13:29.298 "data_offset": 0, 00:13:29.298 "data_size": 65536 00:13:29.298 }, 00:13:29.298 { 00:13:29.298 "name": "BaseBdev4", 00:13:29.298 "uuid": "0b3f99af-ea8a-5ef8-9477-3083a944f05e", 00:13:29.298 "is_configured": true, 00:13:29.298 "data_offset": 0, 00:13:29.298 "data_size": 65536 00:13:29.298 } 00:13:29.298 ] 00:13:29.298 }' 00:13:29.298 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.557 [2024-09-28 08:51:07.336466] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.557 [2024-09-28 08:51:07.401146] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:29.557 [2024-09-28 08:51:07.401210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.557 [2024-09-28 08:51:07.401227] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.557 [2024-09-28 08:51:07.401237] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.557 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.557 "name": "raid_bdev1", 00:13:29.557 "uuid": "738b808c-3df1-4f7c-af53-8e3460ab6512", 00:13:29.557 "strip_size_kb": 0, 00:13:29.557 "state": "online", 00:13:29.557 "raid_level": "raid1", 00:13:29.557 "superblock": false, 00:13:29.557 
"num_base_bdevs": 4, 00:13:29.557 "num_base_bdevs_discovered": 3, 00:13:29.557 "num_base_bdevs_operational": 3, 00:13:29.557 "base_bdevs_list": [ 00:13:29.557 { 00:13:29.557 "name": null, 00:13:29.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.558 "is_configured": false, 00:13:29.558 "data_offset": 0, 00:13:29.558 "data_size": 65536 00:13:29.558 }, 00:13:29.558 { 00:13:29.558 "name": "BaseBdev2", 00:13:29.558 "uuid": "7726fd9f-0ff1-5454-bc79-a61ebabe76a3", 00:13:29.558 "is_configured": true, 00:13:29.558 "data_offset": 0, 00:13:29.558 "data_size": 65536 00:13:29.558 }, 00:13:29.558 { 00:13:29.558 "name": "BaseBdev3", 00:13:29.558 "uuid": "15cc11ea-3cb6-5b13-9a08-5f8cf86dfe5f", 00:13:29.558 "is_configured": true, 00:13:29.558 "data_offset": 0, 00:13:29.558 "data_size": 65536 00:13:29.558 }, 00:13:29.558 { 00:13:29.558 "name": "BaseBdev4", 00:13:29.558 "uuid": "0b3f99af-ea8a-5ef8-9477-3083a944f05e", 00:13:29.558 "is_configured": true, 00:13:29.558 "data_offset": 0, 00:13:29.558 "data_size": 65536 00:13:29.558 } 00:13:29.558 ] 00:13:29.558 }' 00:13:29.558 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.558 08:51:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.127 "name": "raid_bdev1", 00:13:30.127 "uuid": "738b808c-3df1-4f7c-af53-8e3460ab6512", 00:13:30.127 "strip_size_kb": 0, 00:13:30.127 "state": "online", 00:13:30.127 "raid_level": "raid1", 00:13:30.127 "superblock": false, 00:13:30.127 "num_base_bdevs": 4, 00:13:30.127 "num_base_bdevs_discovered": 3, 00:13:30.127 "num_base_bdevs_operational": 3, 00:13:30.127 "base_bdevs_list": [ 00:13:30.127 { 00:13:30.127 "name": null, 00:13:30.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.127 "is_configured": false, 00:13:30.127 "data_offset": 0, 00:13:30.127 "data_size": 65536 00:13:30.127 }, 00:13:30.127 { 00:13:30.127 "name": "BaseBdev2", 00:13:30.127 "uuid": "7726fd9f-0ff1-5454-bc79-a61ebabe76a3", 00:13:30.127 "is_configured": true, 00:13:30.127 "data_offset": 0, 00:13:30.127 "data_size": 65536 00:13:30.127 }, 00:13:30.127 { 00:13:30.127 "name": "BaseBdev3", 00:13:30.127 "uuid": "15cc11ea-3cb6-5b13-9a08-5f8cf86dfe5f", 00:13:30.127 "is_configured": true, 00:13:30.127 "data_offset": 0, 00:13:30.127 "data_size": 65536 00:13:30.127 }, 00:13:30.127 { 00:13:30.127 "name": "BaseBdev4", 00:13:30.127 "uuid": "0b3f99af-ea8a-5ef8-9477-3083a944f05e", 00:13:30.127 "is_configured": true, 00:13:30.127 "data_offset": 0, 00:13:30.127 "data_size": 65536 00:13:30.127 } 00:13:30.127 ] 00:13:30.127 }' 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:30.127 08:51:07 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.127 [2024-09-28 08:51:07.961671] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:30.127 [2024-09-28 08:51:07.975732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.127 08:51:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:30.127 [2024-09-28 08:51:07.977849] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:31.066 08:51:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.066 08:51:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.066 08:51:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.066 08:51:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.066 08:51:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.066 08:51:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.066 08:51:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.066 08:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.066 
08:51:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.066 08:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.066 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.066 "name": "raid_bdev1", 00:13:31.066 "uuid": "738b808c-3df1-4f7c-af53-8e3460ab6512", 00:13:31.066 "strip_size_kb": 0, 00:13:31.066 "state": "online", 00:13:31.066 "raid_level": "raid1", 00:13:31.066 "superblock": false, 00:13:31.066 "num_base_bdevs": 4, 00:13:31.066 "num_base_bdevs_discovered": 4, 00:13:31.066 "num_base_bdevs_operational": 4, 00:13:31.066 "process": { 00:13:31.066 "type": "rebuild", 00:13:31.066 "target": "spare", 00:13:31.066 "progress": { 00:13:31.066 "blocks": 20480, 00:13:31.066 "percent": 31 00:13:31.066 } 00:13:31.066 }, 00:13:31.066 "base_bdevs_list": [ 00:13:31.066 { 00:13:31.066 "name": "spare", 00:13:31.066 "uuid": "cd753532-d4ee-5765-9556-65492a907787", 00:13:31.066 "is_configured": true, 00:13:31.066 "data_offset": 0, 00:13:31.066 "data_size": 65536 00:13:31.066 }, 00:13:31.066 { 00:13:31.066 "name": "BaseBdev2", 00:13:31.066 "uuid": "7726fd9f-0ff1-5454-bc79-a61ebabe76a3", 00:13:31.066 "is_configured": true, 00:13:31.067 "data_offset": 0, 00:13:31.067 "data_size": 65536 00:13:31.067 }, 00:13:31.067 { 00:13:31.067 "name": "BaseBdev3", 00:13:31.067 "uuid": "15cc11ea-3cb6-5b13-9a08-5f8cf86dfe5f", 00:13:31.067 "is_configured": true, 00:13:31.067 "data_offset": 0, 00:13:31.067 "data_size": 65536 00:13:31.067 }, 00:13:31.067 { 00:13:31.067 "name": "BaseBdev4", 00:13:31.067 "uuid": "0b3f99af-ea8a-5ef8-9477-3083a944f05e", 00:13:31.067 "is_configured": true, 00:13:31.067 "data_offset": 0, 00:13:31.067 "data_size": 65536 00:13:31.067 } 00:13:31.067 ] 00:13:31.067 }' 00:13:31.067 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.326 [2024-09-28 08:51:09.121697] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:31.326 [2024-09-28 08:51:09.186351] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.326 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.327 
08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.327 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.327 08:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.327 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.327 08:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.327 08:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.327 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.327 "name": "raid_bdev1", 00:13:31.327 "uuid": "738b808c-3df1-4f7c-af53-8e3460ab6512", 00:13:31.327 "strip_size_kb": 0, 00:13:31.327 "state": "online", 00:13:31.327 "raid_level": "raid1", 00:13:31.327 "superblock": false, 00:13:31.327 "num_base_bdevs": 4, 00:13:31.327 "num_base_bdevs_discovered": 3, 00:13:31.327 "num_base_bdevs_operational": 3, 00:13:31.327 "process": { 00:13:31.327 "type": "rebuild", 00:13:31.327 "target": "spare", 00:13:31.327 "progress": { 00:13:31.327 "blocks": 24576, 00:13:31.327 "percent": 37 00:13:31.327 } 00:13:31.327 }, 00:13:31.327 "base_bdevs_list": [ 00:13:31.327 { 00:13:31.327 "name": "spare", 00:13:31.327 "uuid": "cd753532-d4ee-5765-9556-65492a907787", 00:13:31.327 "is_configured": true, 00:13:31.327 "data_offset": 0, 00:13:31.327 "data_size": 65536 00:13:31.327 }, 00:13:31.327 { 00:13:31.327 "name": null, 00:13:31.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.327 "is_configured": false, 00:13:31.327 "data_offset": 0, 00:13:31.327 "data_size": 65536 00:13:31.327 }, 00:13:31.327 { 00:13:31.327 "name": "BaseBdev3", 00:13:31.327 "uuid": "15cc11ea-3cb6-5b13-9a08-5f8cf86dfe5f", 00:13:31.327 "is_configured": true, 00:13:31.327 "data_offset": 0, 00:13:31.327 "data_size": 65536 00:13:31.327 }, 00:13:31.327 { 
00:13:31.327 "name": "BaseBdev4", 00:13:31.327 "uuid": "0b3f99af-ea8a-5ef8-9477-3083a944f05e", 00:13:31.327 "is_configured": true, 00:13:31.327 "data_offset": 0, 00:13:31.327 "data_size": 65536 00:13:31.327 } 00:13:31.327 ] 00:13:31.327 }' 00:13:31.327 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.327 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.327 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.586 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.586 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=449 00:13:31.586 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:31.586 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.586 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.586 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.587 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.587 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.587 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.587 08:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.587 08:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.587 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.587 08:51:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.587 08:51:09 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.587 "name": "raid_bdev1", 00:13:31.587 "uuid": "738b808c-3df1-4f7c-af53-8e3460ab6512", 00:13:31.587 "strip_size_kb": 0, 00:13:31.587 "state": "online", 00:13:31.587 "raid_level": "raid1", 00:13:31.587 "superblock": false, 00:13:31.587 "num_base_bdevs": 4, 00:13:31.587 "num_base_bdevs_discovered": 3, 00:13:31.587 "num_base_bdevs_operational": 3, 00:13:31.587 "process": { 00:13:31.587 "type": "rebuild", 00:13:31.587 "target": "spare", 00:13:31.587 "progress": { 00:13:31.587 "blocks": 26624, 00:13:31.587 "percent": 40 00:13:31.587 } 00:13:31.587 }, 00:13:31.587 "base_bdevs_list": [ 00:13:31.587 { 00:13:31.587 "name": "spare", 00:13:31.587 "uuid": "cd753532-d4ee-5765-9556-65492a907787", 00:13:31.587 "is_configured": true, 00:13:31.587 "data_offset": 0, 00:13:31.587 "data_size": 65536 00:13:31.587 }, 00:13:31.587 { 00:13:31.587 "name": null, 00:13:31.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.587 "is_configured": false, 00:13:31.587 "data_offset": 0, 00:13:31.587 "data_size": 65536 00:13:31.587 }, 00:13:31.587 { 00:13:31.587 "name": "BaseBdev3", 00:13:31.587 "uuid": "15cc11ea-3cb6-5b13-9a08-5f8cf86dfe5f", 00:13:31.587 "is_configured": true, 00:13:31.587 "data_offset": 0, 00:13:31.587 "data_size": 65536 00:13:31.587 }, 00:13:31.587 { 00:13:31.587 "name": "BaseBdev4", 00:13:31.587 "uuid": "0b3f99af-ea8a-5ef8-9477-3083a944f05e", 00:13:31.587 "is_configured": true, 00:13:31.587 "data_offset": 0, 00:13:31.587 "data_size": 65536 00:13:31.587 } 00:13:31.587 ] 00:13:31.587 }' 00:13:31.587 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.587 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.587 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.587 08:51:09 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.587 08:51:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:32.526 08:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:32.526 08:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.526 08:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.526 08:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.526 08:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.526 08:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.526 08:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.526 08:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.526 08:51:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.526 08:51:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.526 08:51:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.786 08:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.786 "name": "raid_bdev1", 00:13:32.786 "uuid": "738b808c-3df1-4f7c-af53-8e3460ab6512", 00:13:32.786 "strip_size_kb": 0, 00:13:32.786 "state": "online", 00:13:32.786 "raid_level": "raid1", 00:13:32.786 "superblock": false, 00:13:32.786 "num_base_bdevs": 4, 00:13:32.786 "num_base_bdevs_discovered": 3, 00:13:32.786 "num_base_bdevs_operational": 3, 00:13:32.786 "process": { 00:13:32.786 "type": "rebuild", 00:13:32.786 "target": "spare", 00:13:32.786 "progress": { 00:13:32.786 "blocks": 49152, 00:13:32.786 "percent": 75 00:13:32.786 } 00:13:32.786 }, 00:13:32.786 
"base_bdevs_list": [ 00:13:32.786 { 00:13:32.786 "name": "spare", 00:13:32.786 "uuid": "cd753532-d4ee-5765-9556-65492a907787", 00:13:32.786 "is_configured": true, 00:13:32.786 "data_offset": 0, 00:13:32.786 "data_size": 65536 00:13:32.786 }, 00:13:32.786 { 00:13:32.786 "name": null, 00:13:32.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.786 "is_configured": false, 00:13:32.786 "data_offset": 0, 00:13:32.786 "data_size": 65536 00:13:32.786 }, 00:13:32.786 { 00:13:32.786 "name": "BaseBdev3", 00:13:32.786 "uuid": "15cc11ea-3cb6-5b13-9a08-5f8cf86dfe5f", 00:13:32.786 "is_configured": true, 00:13:32.786 "data_offset": 0, 00:13:32.786 "data_size": 65536 00:13:32.786 }, 00:13:32.786 { 00:13:32.786 "name": "BaseBdev4", 00:13:32.786 "uuid": "0b3f99af-ea8a-5ef8-9477-3083a944f05e", 00:13:32.786 "is_configured": true, 00:13:32.786 "data_offset": 0, 00:13:32.786 "data_size": 65536 00:13:32.786 } 00:13:32.786 ] 00:13:32.786 }' 00:13:32.786 08:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.786 08:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.786 08:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.786 08:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.786 08:51:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:33.356 [2024-09-28 08:51:11.200478] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:33.356 [2024-09-28 08:51:11.200630] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:33.356 [2024-09-28 08:51:11.200701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.616 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:33.616 08:51:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.616 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.616 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.616 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.616 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.616 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.616 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.616 08:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.616 08:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.875 "name": "raid_bdev1", 00:13:33.875 "uuid": "738b808c-3df1-4f7c-af53-8e3460ab6512", 00:13:33.875 "strip_size_kb": 0, 00:13:33.875 "state": "online", 00:13:33.875 "raid_level": "raid1", 00:13:33.875 "superblock": false, 00:13:33.875 "num_base_bdevs": 4, 00:13:33.875 "num_base_bdevs_discovered": 3, 00:13:33.875 "num_base_bdevs_operational": 3, 00:13:33.875 "base_bdevs_list": [ 00:13:33.875 { 00:13:33.875 "name": "spare", 00:13:33.875 "uuid": "cd753532-d4ee-5765-9556-65492a907787", 00:13:33.875 "is_configured": true, 00:13:33.875 "data_offset": 0, 00:13:33.875 "data_size": 65536 00:13:33.875 }, 00:13:33.875 { 00:13:33.875 "name": null, 00:13:33.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.875 "is_configured": false, 00:13:33.875 "data_offset": 0, 00:13:33.875 "data_size": 65536 00:13:33.875 }, 
00:13:33.875 { 00:13:33.875 "name": "BaseBdev3", 00:13:33.875 "uuid": "15cc11ea-3cb6-5b13-9a08-5f8cf86dfe5f", 00:13:33.875 "is_configured": true, 00:13:33.875 "data_offset": 0, 00:13:33.875 "data_size": 65536 00:13:33.875 }, 00:13:33.875 { 00:13:33.875 "name": "BaseBdev4", 00:13:33.875 "uuid": "0b3f99af-ea8a-5ef8-9477-3083a944f05e", 00:13:33.875 "is_configured": true, 00:13:33.875 "data_offset": 0, 00:13:33.875 "data_size": 65536 00:13:33.875 } 00:13:33.875 ] 00:13:33.875 }' 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.875 
08:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.875 "name": "raid_bdev1", 00:13:33.875 "uuid": "738b808c-3df1-4f7c-af53-8e3460ab6512", 00:13:33.875 "strip_size_kb": 0, 00:13:33.875 "state": "online", 00:13:33.875 "raid_level": "raid1", 00:13:33.875 "superblock": false, 00:13:33.875 "num_base_bdevs": 4, 00:13:33.875 "num_base_bdevs_discovered": 3, 00:13:33.875 "num_base_bdevs_operational": 3, 00:13:33.875 "base_bdevs_list": [ 00:13:33.875 { 00:13:33.875 "name": "spare", 00:13:33.875 "uuid": "cd753532-d4ee-5765-9556-65492a907787", 00:13:33.875 "is_configured": true, 00:13:33.875 "data_offset": 0, 00:13:33.875 "data_size": 65536 00:13:33.875 }, 00:13:33.875 { 00:13:33.875 "name": null, 00:13:33.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.875 "is_configured": false, 00:13:33.875 "data_offset": 0, 00:13:33.875 "data_size": 65536 00:13:33.875 }, 00:13:33.875 { 00:13:33.875 "name": "BaseBdev3", 00:13:33.875 "uuid": "15cc11ea-3cb6-5b13-9a08-5f8cf86dfe5f", 00:13:33.875 "is_configured": true, 00:13:33.875 "data_offset": 0, 00:13:33.875 "data_size": 65536 00:13:33.875 }, 00:13:33.875 { 00:13:33.875 "name": "BaseBdev4", 00:13:33.875 "uuid": "0b3f99af-ea8a-5ef8-9477-3083a944f05e", 00:13:33.875 "is_configured": true, 00:13:33.875 "data_offset": 0, 00:13:33.875 "data_size": 65536 00:13:33.875 } 00:13:33.875 ] 00:13:33.875 }' 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.875 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.134 "name": "raid_bdev1", 00:13:34.134 "uuid": "738b808c-3df1-4f7c-af53-8e3460ab6512", 00:13:34.134 "strip_size_kb": 0, 00:13:34.134 "state": "online", 00:13:34.134 "raid_level": "raid1", 00:13:34.134 "superblock": false, 00:13:34.134 "num_base_bdevs": 4, 00:13:34.134 "num_base_bdevs_discovered": 3, 00:13:34.134 
"num_base_bdevs_operational": 3, 00:13:34.134 "base_bdevs_list": [ 00:13:34.134 { 00:13:34.134 "name": "spare", 00:13:34.134 "uuid": "cd753532-d4ee-5765-9556-65492a907787", 00:13:34.134 "is_configured": true, 00:13:34.134 "data_offset": 0, 00:13:34.134 "data_size": 65536 00:13:34.134 }, 00:13:34.134 { 00:13:34.134 "name": null, 00:13:34.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.134 "is_configured": false, 00:13:34.134 "data_offset": 0, 00:13:34.134 "data_size": 65536 00:13:34.134 }, 00:13:34.134 { 00:13:34.134 "name": "BaseBdev3", 00:13:34.134 "uuid": "15cc11ea-3cb6-5b13-9a08-5f8cf86dfe5f", 00:13:34.134 "is_configured": true, 00:13:34.134 "data_offset": 0, 00:13:34.134 "data_size": 65536 00:13:34.134 }, 00:13:34.134 { 00:13:34.134 "name": "BaseBdev4", 00:13:34.134 "uuid": "0b3f99af-ea8a-5ef8-9477-3083a944f05e", 00:13:34.134 "is_configured": true, 00:13:34.134 "data_offset": 0, 00:13:34.134 "data_size": 65536 00:13:34.134 } 00:13:34.134 ] 00:13:34.134 }' 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.134 08:51:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.394 08:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:34.394 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.394 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.394 [2024-09-28 08:51:12.319949] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:34.394 [2024-09-28 08:51:12.320026] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:34.394 [2024-09-28 08:51:12.320141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.394 [2024-09-28 08:51:12.320264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:13:34.394 [2024-09-28 08:51:12.320309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:34.394 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.394 08:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:34.394 08:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.394 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.394 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.394 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.394 08:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:34.394 08:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:34.394 08:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:34.394 08:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:34.394 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:34.395 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:34.395 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:34.395 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:34.395 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:34.395 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:34.395 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:34.395 08:51:12 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:34.395 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:34.654 /dev/nbd0 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.654 1+0 records in 00:13:34.654 1+0 records out 00:13:34.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538577 s, 7.6 MB/s 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.654 08:51:12 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:34.654 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:34.914 /dev/nbd1 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.914 1+0 records in 00:13:34.914 1+0 records out 00:13:34.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419067 s, 9.8 MB/s 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:34.914 08:51:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:35.174 08:51:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:35.174 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:35.174 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:35.174 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:35.174 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:35.174 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.174 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:35.434 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:35.434 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:35.434 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:35.434 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:13:35.434 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.434 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:35.434 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:35.434 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.434 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.434 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77526 00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 77526 ']' 00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 77526 00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77526
killing process with pid 77526
Received shutdown signal, test time was about 60.000000 seconds
00:13:35.694
00:13:35.694                                                                                                  Latency(us)
00:13:35.694 Device Information         : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:13:35.694 ===================================================================================================================
00:13:35.694 Total                       :       0.00       0.00       0.00       0.00       0.00 18446744073709551616.00       0.00
00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77526'
00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 77526
00:13:35.694 [2024-09-28 08:51:13.522017] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:35.694 08:51:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 77526
00:13:36.263 [2024-09-28 08:51:14.030916] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0
00:13:37.643
00:13:37.643 real	0m17.289s
00:13:37.643 user	0m19.129s
00:13:37.643 sys	0m3.183s
00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:37.643 ************************************
00:13:37.643 END TEST raid_rebuild_test
00:13:37.643 ************************************
00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:37.643 08:51:15 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb
raid_rebuild_test raid1 4 true false true 00:13:37.643 08:51:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:37.643 08:51:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.643 08:51:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.643 ************************************ 00:13:37.643 START TEST raid_rebuild_test_sb 00:13:37.643 ************************************ 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 
00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77971 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:37.643 08:51:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77971 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77971 ']' 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:37.643 08:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.643 [2024-09-28 08:51:15.532382] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:37.643 [2024-09-28 08:51:15.532592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:37.643 Zero copy mechanism will not be used. 
00:13:37.643 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77971 ] 00:13:37.902 [2024-09-28 08:51:15.700036] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.175 [2024-09-28 08:51:15.947227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.454 [2024-09-28 08:51:16.185267] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.454 [2024-09-28 08:51:16.185399] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.454 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:38.454 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:38.454 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:38.454 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:38.454 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.454 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.454 BaseBdev1_malloc 00:13:38.454 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.454 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:38.454 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.454 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.454 [2024-09-28 08:51:16.409277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:38.454 [2024-09-28 08:51:16.409363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:38.454 [2024-09-28 08:51:16.409388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:38.454 [2024-09-28 08:51:16.409403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.454 [2024-09-28 08:51:16.411979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.454 [2024-09-28 08:51:16.412022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:38.454 BaseBdev1 00:13:38.454 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.454 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:38.454 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:38.454 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.454 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.713 BaseBdev2_malloc 00:13:38.713 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.713 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:38.713 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.713 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.713 [2024-09-28 08:51:16.479113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:38.713 [2024-09-28 08:51:16.479230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.713 [2024-09-28 08:51:16.479272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:38.713 [2024-09-28 08:51:16.479321] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.713 [2024-09-28 08:51:16.481797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.713 [2024-09-28 08:51:16.481868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:38.713 BaseBdev2 00:13:38.713 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.713 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:38.713 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:38.713 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.713 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.713 BaseBdev3_malloc 00:13:38.713 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.713 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:38.713 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.714 [2024-09-28 08:51:16.542273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:38.714 [2024-09-28 08:51:16.542329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.714 [2024-09-28 08:51:16.542368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:38.714 [2024-09-28 08:51:16.542379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.714 [2024-09-28 08:51:16.544772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:13:38.714 [2024-09-28 08:51:16.544867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:38.714 BaseBdev3 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.714 BaseBdev4_malloc 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.714 [2024-09-28 08:51:16.603574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:38.714 [2024-09-28 08:51:16.603694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.714 [2024-09-28 08:51:16.603719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:38.714 [2024-09-28 08:51:16.603732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.714 [2024-09-28 08:51:16.606037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.714 [2024-09-28 08:51:16.606077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:38.714 BaseBdev4 00:13:38.714 08:51:16 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.714 spare_malloc 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.714 spare_delay 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.714 [2024-09-28 08:51:16.676568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:38.714 [2024-09-28 08:51:16.676643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.714 [2024-09-28 08:51:16.676663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:38.714 [2024-09-28 08:51:16.676687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.714 [2024-09-28 08:51:16.679003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:13:38.714 [2024-09-28 08:51:16.679080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:38.714 spare 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.714 [2024-09-28 08:51:16.688616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:38.714 [2024-09-28 08:51:16.690678] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:38.714 [2024-09-28 08:51:16.690745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.714 [2024-09-28 08:51:16.690800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:38.714 [2024-09-28 08:51:16.690975] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:38.714 [2024-09-28 08:51:16.691010] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:38.714 [2024-09-28 08:51:16.691281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:38.714 [2024-09-28 08:51:16.691459] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:38.714 [2024-09-28 08:51:16.691470] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:38.714 [2024-09-28 08:51:16.691623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.714 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.973 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.973 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.973 "name": "raid_bdev1", 00:13:38.973 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:38.973 "strip_size_kb": 0, 00:13:38.973 "state": "online", 00:13:38.973 "raid_level": "raid1", 
00:13:38.973 "superblock": true, 00:13:38.973 "num_base_bdevs": 4, 00:13:38.973 "num_base_bdevs_discovered": 4, 00:13:38.973 "num_base_bdevs_operational": 4, 00:13:38.973 "base_bdevs_list": [ 00:13:38.973 { 00:13:38.973 "name": "BaseBdev1", 00:13:38.973 "uuid": "9b31f3d0-ce28-585a-8832-fa1d6f3fd789", 00:13:38.973 "is_configured": true, 00:13:38.973 "data_offset": 2048, 00:13:38.973 "data_size": 63488 00:13:38.973 }, 00:13:38.973 { 00:13:38.973 "name": "BaseBdev2", 00:13:38.973 "uuid": "17a4ecbe-fbaf-53d0-a988-24788ec7b9f0", 00:13:38.973 "is_configured": true, 00:13:38.973 "data_offset": 2048, 00:13:38.973 "data_size": 63488 00:13:38.973 }, 00:13:38.973 { 00:13:38.973 "name": "BaseBdev3", 00:13:38.973 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:38.973 "is_configured": true, 00:13:38.973 "data_offset": 2048, 00:13:38.973 "data_size": 63488 00:13:38.973 }, 00:13:38.973 { 00:13:38.973 "name": "BaseBdev4", 00:13:38.973 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:38.973 "is_configured": true, 00:13:38.973 "data_offset": 2048, 00:13:38.973 "data_size": 63488 00:13:38.973 } 00:13:38.973 ] 00:13:38.973 }' 00:13:38.973 08:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.974 08:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:39.233 [2024-09-28 08:51:17.148195] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.233 
08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:13:39.233 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:39.492 [2024-09-28 08:51:17.399487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:39.492 /dev/nbd0 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.492 1+0 records in 00:13:39.492 1+0 records out 00:13:39.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354454 s, 11.6 MB/s 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:39.492 08:51:17 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:39.492 08:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:46.062 63488+0 records in 00:13:46.062 63488+0 records out 00:13:46.062 32505856 bytes (33 MB, 31 MiB) copied, 5.61336 s, 5.8 MB/s 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:46.062 [2024-09-28 08:51:23.284053] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.062 [2024-09-28 08:51:23.296141] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.062 
08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.062 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.062 "name": "raid_bdev1", 00:13:46.062 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:46.062 "strip_size_kb": 0, 00:13:46.062 "state": "online", 00:13:46.062 "raid_level": "raid1", 00:13:46.062 "superblock": true, 00:13:46.062 "num_base_bdevs": 4, 00:13:46.062 "num_base_bdevs_discovered": 3, 00:13:46.062 "num_base_bdevs_operational": 3, 00:13:46.062 "base_bdevs_list": [ 00:13:46.062 { 00:13:46.062 "name": null, 00:13:46.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.062 "is_configured": false, 00:13:46.062 "data_offset": 0, 00:13:46.062 "data_size": 63488 00:13:46.062 }, 00:13:46.062 { 00:13:46.062 "name": "BaseBdev2", 00:13:46.062 "uuid": "17a4ecbe-fbaf-53d0-a988-24788ec7b9f0", 00:13:46.062 "is_configured": true, 00:13:46.063 "data_offset": 2048, 00:13:46.063 "data_size": 63488 00:13:46.063 }, 00:13:46.063 { 00:13:46.063 "name": "BaseBdev3", 00:13:46.063 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 
00:13:46.063 "is_configured": true, 00:13:46.063 "data_offset": 2048, 00:13:46.063 "data_size": 63488 00:13:46.063 }, 00:13:46.063 { 00:13:46.063 "name": "BaseBdev4", 00:13:46.063 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:46.063 "is_configured": true, 00:13:46.063 "data_offset": 2048, 00:13:46.063 "data_size": 63488 00:13:46.063 } 00:13:46.063 ] 00:13:46.063 }' 00:13:46.063 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.063 08:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.063 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:46.063 08:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.063 08:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.063 [2024-09-28 08:51:23.719388] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.063 [2024-09-28 08:51:23.735440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:46.063 08:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.063 08:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:46.063 [2024-09-28 08:51:23.737633] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.005 "name": "raid_bdev1", 00:13:47.005 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:47.005 "strip_size_kb": 0, 00:13:47.005 "state": "online", 00:13:47.005 "raid_level": "raid1", 00:13:47.005 "superblock": true, 00:13:47.005 "num_base_bdevs": 4, 00:13:47.005 "num_base_bdevs_discovered": 4, 00:13:47.005 "num_base_bdevs_operational": 4, 00:13:47.005 "process": { 00:13:47.005 "type": "rebuild", 00:13:47.005 "target": "spare", 00:13:47.005 "progress": { 00:13:47.005 "blocks": 20480, 00:13:47.005 "percent": 32 00:13:47.005 } 00:13:47.005 }, 00:13:47.005 "base_bdevs_list": [ 00:13:47.005 { 00:13:47.005 "name": "spare", 00:13:47.005 "uuid": "8dcae057-2350-5ff9-aa12-3b61f0ee52a5", 00:13:47.005 "is_configured": true, 00:13:47.005 "data_offset": 2048, 00:13:47.005 "data_size": 63488 00:13:47.005 }, 00:13:47.005 { 00:13:47.005 "name": "BaseBdev2", 00:13:47.005 "uuid": "17a4ecbe-fbaf-53d0-a988-24788ec7b9f0", 00:13:47.005 "is_configured": true, 00:13:47.005 "data_offset": 2048, 00:13:47.005 "data_size": 63488 00:13:47.005 }, 00:13:47.005 { 00:13:47.005 "name": "BaseBdev3", 00:13:47.005 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:47.005 "is_configured": true, 00:13:47.005 "data_offset": 2048, 00:13:47.005 "data_size": 63488 00:13:47.005 }, 00:13:47.005 { 
00:13:47.005 "name": "BaseBdev4", 00:13:47.005 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:47.005 "is_configured": true, 00:13:47.005 "data_offset": 2048, 00:13:47.005 "data_size": 63488 00:13:47.005 } 00:13:47.005 ] 00:13:47.005 }' 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.005 [2024-09-28 08:51:24.897811] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.005 [2024-09-28 08:51:24.946580] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:47.005 [2024-09-28 08:51:24.946645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.005 [2024-09-28 08:51:24.946689] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.005 [2024-09-28 08:51:24.946700] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.005 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.006 08:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.006 08:51:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.006 08:51:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.006 08:51:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.265 08:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.265 "name": "raid_bdev1", 00:13:47.265 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:47.265 "strip_size_kb": 0, 00:13:47.265 "state": "online", 00:13:47.265 "raid_level": "raid1", 00:13:47.265 "superblock": true, 00:13:47.265 "num_base_bdevs": 4, 00:13:47.265 "num_base_bdevs_discovered": 3, 00:13:47.265 "num_base_bdevs_operational": 3, 00:13:47.265 "base_bdevs_list": [ 00:13:47.265 { 00:13:47.265 "name": null, 00:13:47.265 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:47.265 "is_configured": false, 00:13:47.265 "data_offset": 0, 00:13:47.265 "data_size": 63488 00:13:47.265 }, 00:13:47.265 { 00:13:47.265 "name": "BaseBdev2", 00:13:47.265 "uuid": "17a4ecbe-fbaf-53d0-a988-24788ec7b9f0", 00:13:47.265 "is_configured": true, 00:13:47.265 "data_offset": 2048, 00:13:47.265 "data_size": 63488 00:13:47.265 }, 00:13:47.265 { 00:13:47.265 "name": "BaseBdev3", 00:13:47.265 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:47.265 "is_configured": true, 00:13:47.265 "data_offset": 2048, 00:13:47.265 "data_size": 63488 00:13:47.265 }, 00:13:47.265 { 00:13:47.265 "name": "BaseBdev4", 00:13:47.265 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:47.265 "is_configured": true, 00:13:47.265 "data_offset": 2048, 00:13:47.265 "data_size": 63488 00:13:47.266 } 00:13:47.266 ] 00:13:47.266 }' 00:13:47.266 08:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.266 08:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.525 08:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:47.525 08:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.525 08:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:47.525 08:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:47.525 08:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.525 08:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.525 08:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.525 08:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.525 08:51:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.525 08:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.525 08:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.525 "name": "raid_bdev1", 00:13:47.525 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:47.525 "strip_size_kb": 0, 00:13:47.525 "state": "online", 00:13:47.525 "raid_level": "raid1", 00:13:47.525 "superblock": true, 00:13:47.525 "num_base_bdevs": 4, 00:13:47.525 "num_base_bdevs_discovered": 3, 00:13:47.525 "num_base_bdevs_operational": 3, 00:13:47.525 "base_bdevs_list": [ 00:13:47.525 { 00:13:47.525 "name": null, 00:13:47.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.525 "is_configured": false, 00:13:47.525 "data_offset": 0, 00:13:47.525 "data_size": 63488 00:13:47.525 }, 00:13:47.525 { 00:13:47.525 "name": "BaseBdev2", 00:13:47.525 "uuid": "17a4ecbe-fbaf-53d0-a988-24788ec7b9f0", 00:13:47.525 "is_configured": true, 00:13:47.525 "data_offset": 2048, 00:13:47.525 "data_size": 63488 00:13:47.525 }, 00:13:47.525 { 00:13:47.525 "name": "BaseBdev3", 00:13:47.525 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:47.525 "is_configured": true, 00:13:47.525 "data_offset": 2048, 00:13:47.525 "data_size": 63488 00:13:47.525 }, 00:13:47.525 { 00:13:47.525 "name": "BaseBdev4", 00:13:47.525 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:47.525 "is_configured": true, 00:13:47.525 "data_offset": 2048, 00:13:47.525 "data_size": 63488 00:13:47.525 } 00:13:47.525 ] 00:13:47.525 }' 00:13:47.525 08:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.784 08:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:47.784 08:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.784 08:51:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:47.784 08:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:47.784 08:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.784 08:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.784 [2024-09-28 08:51:25.567365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:47.784 [2024-09-28 08:51:25.581214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:47.784 08:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.784 08:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:47.784 [2024-09-28 08:51:25.583408] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:48.727 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.727 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.727 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.727 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.727 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.727 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.727 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.727 08:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.727 08:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.727 08:51:26 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.727 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.727 "name": "raid_bdev1", 00:13:48.727 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:48.727 "strip_size_kb": 0, 00:13:48.727 "state": "online", 00:13:48.727 "raid_level": "raid1", 00:13:48.727 "superblock": true, 00:13:48.727 "num_base_bdevs": 4, 00:13:48.727 "num_base_bdevs_discovered": 4, 00:13:48.727 "num_base_bdevs_operational": 4, 00:13:48.727 "process": { 00:13:48.727 "type": "rebuild", 00:13:48.727 "target": "spare", 00:13:48.727 "progress": { 00:13:48.727 "blocks": 20480, 00:13:48.727 "percent": 32 00:13:48.727 } 00:13:48.727 }, 00:13:48.727 "base_bdevs_list": [ 00:13:48.727 { 00:13:48.727 "name": "spare", 00:13:48.727 "uuid": "8dcae057-2350-5ff9-aa12-3b61f0ee52a5", 00:13:48.727 "is_configured": true, 00:13:48.727 "data_offset": 2048, 00:13:48.727 "data_size": 63488 00:13:48.727 }, 00:13:48.727 { 00:13:48.727 "name": "BaseBdev2", 00:13:48.727 "uuid": "17a4ecbe-fbaf-53d0-a988-24788ec7b9f0", 00:13:48.727 "is_configured": true, 00:13:48.727 "data_offset": 2048, 00:13:48.727 "data_size": 63488 00:13:48.727 }, 00:13:48.727 { 00:13:48.727 "name": "BaseBdev3", 00:13:48.727 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:48.727 "is_configured": true, 00:13:48.727 "data_offset": 2048, 00:13:48.727 "data_size": 63488 00:13:48.727 }, 00:13:48.727 { 00:13:48.727 "name": "BaseBdev4", 00:13:48.727 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:48.727 "is_configured": true, 00:13:48.727 "data_offset": 2048, 00:13:48.727 "data_size": 63488 00:13:48.727 } 00:13:48.727 ] 00:13:48.727 }' 00:13:48.727 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.727 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.727 08:51:26 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:48.987 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.987 [2024-09-28 08:51:26.743482] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:48.987 [2024-09-28 08:51:26.892064] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.987 "name": "raid_bdev1", 00:13:48.987 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:48.987 "strip_size_kb": 0, 00:13:48.987 "state": "online", 00:13:48.987 "raid_level": "raid1", 00:13:48.987 "superblock": true, 00:13:48.987 "num_base_bdevs": 4, 00:13:48.987 "num_base_bdevs_discovered": 3, 00:13:48.987 "num_base_bdevs_operational": 3, 00:13:48.987 "process": { 00:13:48.987 "type": "rebuild", 00:13:48.987 "target": "spare", 00:13:48.987 "progress": { 00:13:48.987 "blocks": 24576, 00:13:48.987 "percent": 38 00:13:48.987 } 00:13:48.987 }, 00:13:48.987 "base_bdevs_list": [ 00:13:48.987 { 00:13:48.987 "name": "spare", 00:13:48.987 "uuid": "8dcae057-2350-5ff9-aa12-3b61f0ee52a5", 00:13:48.987 "is_configured": true, 00:13:48.987 "data_offset": 2048, 00:13:48.987 "data_size": 63488 00:13:48.987 }, 00:13:48.987 { 00:13:48.987 "name": null, 00:13:48.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.987 "is_configured": false, 00:13:48.987 "data_offset": 0, 00:13:48.987 "data_size": 63488 00:13:48.987 }, 00:13:48.987 { 00:13:48.987 "name": "BaseBdev3", 
00:13:48.987 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:48.987 "is_configured": true, 00:13:48.987 "data_offset": 2048, 00:13:48.987 "data_size": 63488 00:13:48.987 }, 00:13:48.987 { 00:13:48.987 "name": "BaseBdev4", 00:13:48.987 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:48.987 "is_configured": true, 00:13:48.987 "data_offset": 2048, 00:13:48.987 "data_size": 63488 00:13:48.987 } 00:13:48.987 ] 00:13:48.987 }' 00:13:48.987 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.247 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.248 08:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=467 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.248 "name": "raid_bdev1", 00:13:49.248 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:49.248 "strip_size_kb": 0, 00:13:49.248 "state": "online", 00:13:49.248 "raid_level": "raid1", 00:13:49.248 "superblock": true, 00:13:49.248 "num_base_bdevs": 4, 00:13:49.248 "num_base_bdevs_discovered": 3, 00:13:49.248 "num_base_bdevs_operational": 3, 00:13:49.248 "process": { 00:13:49.248 "type": "rebuild", 00:13:49.248 "target": "spare", 00:13:49.248 "progress": { 00:13:49.248 "blocks": 26624, 00:13:49.248 "percent": 41 00:13:49.248 } 00:13:49.248 }, 00:13:49.248 "base_bdevs_list": [ 00:13:49.248 { 00:13:49.248 "name": "spare", 00:13:49.248 "uuid": "8dcae057-2350-5ff9-aa12-3b61f0ee52a5", 00:13:49.248 "is_configured": true, 00:13:49.248 "data_offset": 2048, 00:13:49.248 "data_size": 63488 00:13:49.248 }, 00:13:49.248 { 00:13:49.248 "name": null, 00:13:49.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.248 "is_configured": false, 00:13:49.248 "data_offset": 0, 00:13:49.248 "data_size": 63488 00:13:49.248 }, 00:13:49.248 { 00:13:49.248 "name": "BaseBdev3", 00:13:49.248 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:49.248 "is_configured": true, 00:13:49.248 "data_offset": 2048, 00:13:49.248 "data_size": 63488 00:13:49.248 }, 00:13:49.248 { 00:13:49.248 "name": "BaseBdev4", 00:13:49.248 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:49.248 "is_configured": true, 00:13:49.248 "data_offset": 2048, 00:13:49.248 "data_size": 63488 00:13:49.248 } 00:13:49.248 ] 00:13:49.248 }' 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.248 08:51:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.248 08:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.628 "name": "raid_bdev1", 00:13:50.628 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:50.628 "strip_size_kb": 0, 00:13:50.628 "state": "online", 00:13:50.628 "raid_level": "raid1", 00:13:50.628 "superblock": true, 00:13:50.628 "num_base_bdevs": 4, 
00:13:50.628 "num_base_bdevs_discovered": 3, 00:13:50.628 "num_base_bdevs_operational": 3, 00:13:50.628 "process": { 00:13:50.628 "type": "rebuild", 00:13:50.628 "target": "spare", 00:13:50.628 "progress": { 00:13:50.628 "blocks": 51200, 00:13:50.628 "percent": 80 00:13:50.628 } 00:13:50.628 }, 00:13:50.628 "base_bdevs_list": [ 00:13:50.628 { 00:13:50.628 "name": "spare", 00:13:50.628 "uuid": "8dcae057-2350-5ff9-aa12-3b61f0ee52a5", 00:13:50.628 "is_configured": true, 00:13:50.628 "data_offset": 2048, 00:13:50.628 "data_size": 63488 00:13:50.628 }, 00:13:50.628 { 00:13:50.628 "name": null, 00:13:50.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.628 "is_configured": false, 00:13:50.628 "data_offset": 0, 00:13:50.628 "data_size": 63488 00:13:50.628 }, 00:13:50.628 { 00:13:50.628 "name": "BaseBdev3", 00:13:50.628 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:50.628 "is_configured": true, 00:13:50.628 "data_offset": 2048, 00:13:50.628 "data_size": 63488 00:13:50.628 }, 00:13:50.628 { 00:13:50.628 "name": "BaseBdev4", 00:13:50.628 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:50.628 "is_configured": true, 00:13:50.628 "data_offset": 2048, 00:13:50.628 "data_size": 63488 00:13:50.628 } 00:13:50.628 ] 00:13:50.628 }' 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.628 08:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:50.888 [2024-09-28 08:51:28.805981] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:50.888 [2024-09-28 08:51:28.806053] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:50.888 [2024-09-28 08:51:28.806171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.457 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:51.457 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.457 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.457 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.457 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.457 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.457 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.457 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.457 08:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.457 08:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.457 08:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.457 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.457 "name": "raid_bdev1", 00:13:51.457 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:51.457 "strip_size_kb": 0, 00:13:51.457 "state": "online", 00:13:51.457 "raid_level": "raid1", 00:13:51.457 "superblock": true, 00:13:51.457 "num_base_bdevs": 4, 00:13:51.457 "num_base_bdevs_discovered": 3, 00:13:51.457 "num_base_bdevs_operational": 3, 00:13:51.457 "base_bdevs_list": [ 00:13:51.457 { 00:13:51.457 "name": "spare", 00:13:51.457 "uuid": 
"8dcae057-2350-5ff9-aa12-3b61f0ee52a5", 00:13:51.457 "is_configured": true, 00:13:51.457 "data_offset": 2048, 00:13:51.457 "data_size": 63488 00:13:51.457 }, 00:13:51.457 { 00:13:51.457 "name": null, 00:13:51.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.457 "is_configured": false, 00:13:51.457 "data_offset": 0, 00:13:51.457 "data_size": 63488 00:13:51.457 }, 00:13:51.457 { 00:13:51.457 "name": "BaseBdev3", 00:13:51.457 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:51.457 "is_configured": true, 00:13:51.457 "data_offset": 2048, 00:13:51.457 "data_size": 63488 00:13:51.457 }, 00:13:51.457 { 00:13:51.457 "name": "BaseBdev4", 00:13:51.457 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:51.457 "is_configured": true, 00:13:51.457 "data_offset": 2048, 00:13:51.457 "data_size": 63488 00:13:51.457 } 00:13:51.457 ] 00:13:51.457 }' 00:13:51.457 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.716 08:51:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.716 "name": "raid_bdev1", 00:13:51.716 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:51.716 "strip_size_kb": 0, 00:13:51.716 "state": "online", 00:13:51.716 "raid_level": "raid1", 00:13:51.716 "superblock": true, 00:13:51.716 "num_base_bdevs": 4, 00:13:51.716 "num_base_bdevs_discovered": 3, 00:13:51.716 "num_base_bdevs_operational": 3, 00:13:51.716 "base_bdevs_list": [ 00:13:51.716 { 00:13:51.716 "name": "spare", 00:13:51.716 "uuid": "8dcae057-2350-5ff9-aa12-3b61f0ee52a5", 00:13:51.716 "is_configured": true, 00:13:51.716 "data_offset": 2048, 00:13:51.716 "data_size": 63488 00:13:51.716 }, 00:13:51.716 { 00:13:51.716 "name": null, 00:13:51.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.716 "is_configured": false, 00:13:51.716 "data_offset": 0, 00:13:51.716 "data_size": 63488 00:13:51.716 }, 00:13:51.716 { 00:13:51.716 "name": "BaseBdev3", 00:13:51.716 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:51.716 "is_configured": true, 00:13:51.716 "data_offset": 2048, 00:13:51.716 "data_size": 63488 00:13:51.716 }, 00:13:51.716 { 00:13:51.716 "name": "BaseBdev4", 00:13:51.716 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:51.716 "is_configured": true, 00:13:51.716 "data_offset": 2048, 00:13:51.716 "data_size": 63488 00:13:51.716 } 00:13:51.716 ] 00:13:51.716 }' 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.716 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.716 "name": "raid_bdev1", 00:13:51.716 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:51.716 "strip_size_kb": 0, 00:13:51.716 "state": "online", 00:13:51.716 "raid_level": "raid1", 00:13:51.716 "superblock": true, 00:13:51.716 "num_base_bdevs": 4, 00:13:51.716 "num_base_bdevs_discovered": 3, 00:13:51.716 "num_base_bdevs_operational": 3, 00:13:51.716 "base_bdevs_list": [ 00:13:51.716 { 00:13:51.716 "name": "spare", 00:13:51.716 "uuid": "8dcae057-2350-5ff9-aa12-3b61f0ee52a5", 00:13:51.716 "is_configured": true, 00:13:51.716 "data_offset": 2048, 00:13:51.716 "data_size": 63488 00:13:51.716 }, 00:13:51.716 { 00:13:51.716 "name": null, 00:13:51.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.716 "is_configured": false, 00:13:51.716 "data_offset": 0, 00:13:51.716 "data_size": 63488 00:13:51.716 }, 00:13:51.716 { 00:13:51.716 "name": "BaseBdev3", 00:13:51.716 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:51.716 "is_configured": true, 00:13:51.716 "data_offset": 2048, 00:13:51.716 "data_size": 63488 00:13:51.716 }, 00:13:51.716 { 00:13:51.716 "name": "BaseBdev4", 00:13:51.716 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:51.716 "is_configured": true, 00:13:51.716 "data_offset": 2048, 00:13:51.716 "data_size": 63488 00:13:51.716 } 00:13:51.716 ] 00:13:51.717 }' 00:13:51.717 08:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.717 08:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.285 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:52.285 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.285 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.286 
[2024-09-28 08:51:30.077125] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:52.286 [2024-09-28 08:51:30.077215] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:52.286 [2024-09-28 08:51:30.077342] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.286 [2024-09-28 08:51:30.077463] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:52.286 [2024-09-28 08:51:30.077524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:52.286 08:51:30 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:52.286 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:52.546 /dev/nbd0 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:13:52.546 1+0 records in 00:13:52.546 1+0 records out 00:13:52.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303612 s, 13.5 MB/s 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:52.546 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:52.806 /dev/nbd1 00:13:52.806 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 
00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:52.807 1+0 records in 00:13:52.807 1+0 records out 00:13:52.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515706 s, 7.9 MB/s 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:52.807 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:53.066 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:53.066 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:53.067 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:53.067 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:53.067 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local 
i 00:13:53.067 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:53.067 08:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:53.067 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:53.067 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:53.067 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:53.067 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:53.067 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:53.067 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:53.067 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:53.067 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:53.067 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:53.067 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:53.326 
08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.326 [2024-09-28 08:51:31.273684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:53.326 [2024-09-28 08:51:31.273785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.326 [2024-09-28 08:51:31.273816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:53.326 [2024-09-28 08:51:31.273826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.326 [2024-09-28 08:51:31.276397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.326 [2024-09-28 08:51:31.276474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:53.326 [2024-09-28 08:51:31.276594] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:53.326 [2024-09-28 08:51:31.276668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
spare is claimed 00:13:53.326 [2024-09-28 08:51:31.276825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:53.326 [2024-09-28 08:51:31.276930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:53.326 spare 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.326 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.586 [2024-09-28 08:51:31.376831] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:53.586 [2024-09-28 08:51:31.376908] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:53.586 [2024-09-28 08:51:31.377298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:53.586 [2024-09-28 08:51:31.377531] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:53.586 [2024-09-28 08:51:31.377580] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:53.586 [2024-09-28 08:51:31.377838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.586 "name": "raid_bdev1", 00:13:53.586 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:53.586 "strip_size_kb": 0, 00:13:53.586 "state": "online", 00:13:53.586 "raid_level": "raid1", 00:13:53.586 "superblock": true, 00:13:53.586 "num_base_bdevs": 4, 00:13:53.586 "num_base_bdevs_discovered": 3, 00:13:53.586 "num_base_bdevs_operational": 3, 00:13:53.586 "base_bdevs_list": [ 00:13:53.586 { 00:13:53.586 "name": "spare", 00:13:53.586 "uuid": "8dcae057-2350-5ff9-aa12-3b61f0ee52a5", 00:13:53.586 "is_configured": true, 00:13:53.586 "data_offset": 2048, 00:13:53.586 "data_size": 63488 00:13:53.586 }, 00:13:53.586 { 00:13:53.586 "name": null, 
00:13:53.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.586 "is_configured": false, 00:13:53.586 "data_offset": 2048, 00:13:53.586 "data_size": 63488 00:13:53.586 }, 00:13:53.586 { 00:13:53.586 "name": "BaseBdev3", 00:13:53.586 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:53.586 "is_configured": true, 00:13:53.586 "data_offset": 2048, 00:13:53.586 "data_size": 63488 00:13:53.586 }, 00:13:53.586 { 00:13:53.586 "name": "BaseBdev4", 00:13:53.586 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:53.586 "is_configured": true, 00:13:53.586 "data_offset": 2048, 00:13:53.586 "data_size": 63488 00:13:53.586 } 00:13:53.586 ] 00:13:53.586 }' 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.586 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.845 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:53.845 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.845 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:53.845 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:53.845 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.845 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.845 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.845 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.845 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.845 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.845 08:51:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.845 "name": "raid_bdev1", 00:13:53.845 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:53.845 "strip_size_kb": 0, 00:13:53.845 "state": "online", 00:13:53.845 "raid_level": "raid1", 00:13:53.845 "superblock": true, 00:13:53.846 "num_base_bdevs": 4, 00:13:53.846 "num_base_bdevs_discovered": 3, 00:13:53.846 "num_base_bdevs_operational": 3, 00:13:53.846 "base_bdevs_list": [ 00:13:53.846 { 00:13:53.846 "name": "spare", 00:13:53.846 "uuid": "8dcae057-2350-5ff9-aa12-3b61f0ee52a5", 00:13:53.846 "is_configured": true, 00:13:53.846 "data_offset": 2048, 00:13:53.846 "data_size": 63488 00:13:53.846 }, 00:13:53.846 { 00:13:53.846 "name": null, 00:13:53.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.846 "is_configured": false, 00:13:53.846 "data_offset": 2048, 00:13:53.846 "data_size": 63488 00:13:53.846 }, 00:13:53.846 { 00:13:53.846 "name": "BaseBdev3", 00:13:53.846 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:53.846 "is_configured": true, 00:13:53.846 "data_offset": 2048, 00:13:53.846 "data_size": 63488 00:13:53.846 }, 00:13:53.846 { 00:13:53.846 "name": "BaseBdev4", 00:13:53.846 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:53.846 "is_configured": true, 00:13:53.846 "data_offset": 2048, 00:13:53.846 "data_size": 63488 00:13:53.846 } 00:13:53.846 ] 00:13:53.846 }' 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.106 08:51:31 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.106 [2024-09-28 08:51:31.992749] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.106 08:51:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.106 08:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.106 08:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.106 08:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.106 08:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.106 08:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.106 08:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.106 "name": "raid_bdev1", 00:13:54.106 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:54.106 "strip_size_kb": 0, 00:13:54.106 "state": "online", 00:13:54.106 "raid_level": "raid1", 00:13:54.106 "superblock": true, 00:13:54.106 "num_base_bdevs": 4, 00:13:54.106 "num_base_bdevs_discovered": 2, 00:13:54.106 "num_base_bdevs_operational": 2, 00:13:54.106 "base_bdevs_list": [ 00:13:54.106 { 00:13:54.106 "name": null, 00:13:54.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.106 "is_configured": false, 00:13:54.106 "data_offset": 0, 00:13:54.106 "data_size": 63488 00:13:54.106 }, 00:13:54.106 { 00:13:54.106 "name": null, 00:13:54.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.106 "is_configured": false, 00:13:54.106 "data_offset": 2048, 00:13:54.106 "data_size": 63488 00:13:54.106 }, 00:13:54.106 { 00:13:54.106 "name": "BaseBdev3", 00:13:54.106 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:54.106 "is_configured": true, 00:13:54.106 "data_offset": 2048, 00:13:54.106 "data_size": 63488 00:13:54.106 }, 00:13:54.106 { 00:13:54.106 "name": "BaseBdev4", 00:13:54.106 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:54.106 "is_configured": 
true, 00:13:54.106 "data_offset": 2048, 00:13:54.106 "data_size": 63488 00:13:54.106 } 00:13:54.106 ] 00:13:54.106 }' 00:13:54.106 08:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.106 08:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.703 08:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:54.703 08:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.703 08:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.703 [2024-09-28 08:51:32.439999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.703 [2024-09-28 08:51:32.440292] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:54.703 [2024-09-28 08:51:32.440354] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
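The trace above repeatedly runs `verify_raid_bdev_state`, which fetches `bdev_raid_get_bdevs all` over RPC and checks fields like `state` and `num_base_bdevs_discovered` against expected values. The following is a minimal, self-contained sketch of that check; the JSON here is a trimmed stand-in for a real RPC response (the actual script extracts fields with `jq`, which this sketch replaces with `sed` so it runs anywhere):

```shell
#!/bin/bash
# Sketch of the verify_raid_bdev_state check seen in the log above.
# The JSON is a hand-trimmed stand-in for `rpc.py bdev_raid_get_bdevs all`.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}'

# Pull out the fields the test compares (the real script uses jq for this).
state=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"state": "\([a-z]*\)".*/\1/p')
discovered=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"num_base_bdevs_discovered": \([0-9]*\).*/\1/p')

# Fail the test run if the raid bdev is not in the expected state.
[ "$state" = online ] && [ "$discovered" -eq 3 ] && echo "state ok"
```

After the spare is removed, the same check is repeated with `num_base_bdevs_discovered` expected to drop from 3 to 2, as the JSON dumps above show.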
00:13:54.703 [2024-09-28 08:51:32.440420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.703 [2024-09-28 08:51:32.454061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:54.703 08:51:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.703 08:51:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:54.703 [2024-09-28 08:51:32.456312] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.661 "name": "raid_bdev1", 00:13:55.661 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:55.661 "strip_size_kb": 0, 00:13:55.661 "state": "online", 00:13:55.661 "raid_level": "raid1", 
00:13:55.661 "superblock": true, 00:13:55.661 "num_base_bdevs": 4, 00:13:55.661 "num_base_bdevs_discovered": 3, 00:13:55.661 "num_base_bdevs_operational": 3, 00:13:55.661 "process": { 00:13:55.661 "type": "rebuild", 00:13:55.661 "target": "spare", 00:13:55.661 "progress": { 00:13:55.661 "blocks": 20480, 00:13:55.661 "percent": 32 00:13:55.661 } 00:13:55.661 }, 00:13:55.661 "base_bdevs_list": [ 00:13:55.661 { 00:13:55.661 "name": "spare", 00:13:55.661 "uuid": "8dcae057-2350-5ff9-aa12-3b61f0ee52a5", 00:13:55.661 "is_configured": true, 00:13:55.661 "data_offset": 2048, 00:13:55.661 "data_size": 63488 00:13:55.661 }, 00:13:55.661 { 00:13:55.661 "name": null, 00:13:55.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.661 "is_configured": false, 00:13:55.661 "data_offset": 2048, 00:13:55.661 "data_size": 63488 00:13:55.661 }, 00:13:55.661 { 00:13:55.661 "name": "BaseBdev3", 00:13:55.661 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:55.661 "is_configured": true, 00:13:55.661 "data_offset": 2048, 00:13:55.661 "data_size": 63488 00:13:55.661 }, 00:13:55.661 { 00:13:55.661 "name": "BaseBdev4", 00:13:55.661 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:55.661 "is_configured": true, 00:13:55.661 "data_offset": 2048, 00:13:55.661 "data_size": 63488 00:13:55.661 } 00:13:55.661 ] 00:13:55.661 }' 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:55.661 08:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.661 [2024-09-28 08:51:33.600952] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:55.921 [2024-09-28 08:51:33.665605] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:55.921 [2024-09-28 08:51:33.665673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.921 [2024-09-28 08:51:33.665693] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:55.921 [2024-09-28 08:51:33.665701] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.921 "name": "raid_bdev1", 00:13:55.921 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:55.921 "strip_size_kb": 0, 00:13:55.921 "state": "online", 00:13:55.921 "raid_level": "raid1", 00:13:55.921 "superblock": true, 00:13:55.921 "num_base_bdevs": 4, 00:13:55.921 "num_base_bdevs_discovered": 2, 00:13:55.921 "num_base_bdevs_operational": 2, 00:13:55.921 "base_bdevs_list": [ 00:13:55.921 { 00:13:55.921 "name": null, 00:13:55.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.921 "is_configured": false, 00:13:55.921 "data_offset": 0, 00:13:55.921 "data_size": 63488 00:13:55.921 }, 00:13:55.921 { 00:13:55.921 "name": null, 00:13:55.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.921 "is_configured": false, 00:13:55.921 "data_offset": 2048, 00:13:55.921 "data_size": 63488 00:13:55.921 }, 00:13:55.921 { 00:13:55.921 "name": "BaseBdev3", 00:13:55.921 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:55.921 "is_configured": true, 00:13:55.921 "data_offset": 2048, 00:13:55.921 "data_size": 63488 00:13:55.921 }, 00:13:55.921 { 00:13:55.921 "name": "BaseBdev4", 00:13:55.921 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:55.921 "is_configured": true, 00:13:55.921 "data_offset": 2048, 00:13:55.921 "data_size": 63488 00:13:55.921 } 00:13:55.921 ] 00:13:55.921 }' 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:55.921 08:51:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.181 08:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:56.181 08:51:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.181 08:51:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.181 [2024-09-28 08:51:34.054235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:56.181 [2024-09-28 08:51:34.054307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.181 [2024-09-28 08:51:34.054340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:56.181 [2024-09-28 08:51:34.054350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.181 [2024-09-28 08:51:34.054925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.181 [2024-09-28 08:51:34.054944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:56.181 [2024-09-28 08:51:34.055051] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:56.181 [2024-09-28 08:51:34.055064] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:56.181 [2024-09-28 08:51:34.055080] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
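Earlier in this run, `waitfornbd_exit` polls `/proc/partitions` up to 20 times until the nbd device name disappears after `nbd_stop_disk`. Below is a hedged, self-contained sketch of that retry loop; a temp file stands in for `/proc/partitions`, and the `wait_gone` name and the simulated-removal `sed` line are illustrative additions, not part of the SPDK script:

```shell
#!/bin/bash
# Sketch of the waitfornbd_exit polling loop from nbd_common.sh above.
# A temp file stands in for /proc/partitions so the sketch runs standalone.
partitions=$(mktemp)
printf 'nbd0\nsda\n' > "$partitions"

wait_gone() {
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # Done as soon as the device no longer appears in the partition table.
        grep -q -w "$name" "$partitions" || return 0
        # Simulate the kernel detaching the device between polls
        # (the real loop just sleeps and re-reads /proc/partitions).
        sed -i "/^${name}\$/d" "$partitions"
    done
    return 1
}

wait_gone nbd0 && result=gone || result=stuck
echo "$result"
rm -f "$partitions"
```

In the log, the loop hits its `break` on the first pass because the nbd device is already detached by the time the first `grep` runs.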
00:13:56.181 [2024-09-28 08:51:34.055105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:56.181 [2024-09-28 08:51:34.068943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:56.181 spare 00:13:56.181 08:51:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.181 08:51:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:56.181 [2024-09-28 08:51:34.071134] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:57.121 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.121 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.121 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.121 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.121 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.121 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.121 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.121 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.121 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.121 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.381 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.381 "name": "raid_bdev1", 00:13:57.381 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:57.381 "strip_size_kb": 0, 00:13:57.381 "state": "online", 00:13:57.381 
"raid_level": "raid1", 00:13:57.381 "superblock": true, 00:13:57.381 "num_base_bdevs": 4, 00:13:57.381 "num_base_bdevs_discovered": 3, 00:13:57.381 "num_base_bdevs_operational": 3, 00:13:57.381 "process": { 00:13:57.381 "type": "rebuild", 00:13:57.381 "target": "spare", 00:13:57.381 "progress": { 00:13:57.381 "blocks": 20480, 00:13:57.381 "percent": 32 00:13:57.381 } 00:13:57.381 }, 00:13:57.381 "base_bdevs_list": [ 00:13:57.381 { 00:13:57.381 "name": "spare", 00:13:57.381 "uuid": "8dcae057-2350-5ff9-aa12-3b61f0ee52a5", 00:13:57.381 "is_configured": true, 00:13:57.381 "data_offset": 2048, 00:13:57.381 "data_size": 63488 00:13:57.381 }, 00:13:57.381 { 00:13:57.381 "name": null, 00:13:57.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.381 "is_configured": false, 00:13:57.381 "data_offset": 2048, 00:13:57.381 "data_size": 63488 00:13:57.381 }, 00:13:57.381 { 00:13:57.381 "name": "BaseBdev3", 00:13:57.381 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:57.381 "is_configured": true, 00:13:57.381 "data_offset": 2048, 00:13:57.381 "data_size": 63488 00:13:57.381 }, 00:13:57.381 { 00:13:57.381 "name": "BaseBdev4", 00:13:57.381 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:57.381 "is_configured": true, 00:13:57.381 "data_offset": 2048, 00:13:57.381 "data_size": 63488 00:13:57.381 } 00:13:57.381 ] 00:13:57.381 }' 00:13:57.381 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.381 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.381 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.381 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.381 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:57.381 08:51:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.381 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.381 [2024-09-28 08:51:35.207183] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:57.381 [2024-09-28 08:51:35.279856] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:57.381 [2024-09-28 08:51:35.279966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.381 [2024-09-28 08:51:35.280004] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:57.381 [2024-09-28 08:51:35.280029] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:57.381 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.381 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:57.381 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.381 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.381 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.382 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.382 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:57.382 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.382 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.382 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.382 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.382 
08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.382 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.382 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.382 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.382 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.382 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.382 "name": "raid_bdev1", 00:13:57.382 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:57.382 "strip_size_kb": 0, 00:13:57.382 "state": "online", 00:13:57.382 "raid_level": "raid1", 00:13:57.382 "superblock": true, 00:13:57.382 "num_base_bdevs": 4, 00:13:57.382 "num_base_bdevs_discovered": 2, 00:13:57.382 "num_base_bdevs_operational": 2, 00:13:57.382 "base_bdevs_list": [ 00:13:57.382 { 00:13:57.382 "name": null, 00:13:57.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.382 "is_configured": false, 00:13:57.382 "data_offset": 0, 00:13:57.382 "data_size": 63488 00:13:57.382 }, 00:13:57.382 { 00:13:57.382 "name": null, 00:13:57.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.382 "is_configured": false, 00:13:57.382 "data_offset": 2048, 00:13:57.382 "data_size": 63488 00:13:57.382 }, 00:13:57.382 { 00:13:57.382 "name": "BaseBdev3", 00:13:57.382 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:57.382 "is_configured": true, 00:13:57.382 "data_offset": 2048, 00:13:57.382 "data_size": 63488 00:13:57.382 }, 00:13:57.382 { 00:13:57.382 "name": "BaseBdev4", 00:13:57.382 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:57.382 "is_configured": true, 00:13:57.382 "data_offset": 2048, 00:13:57.382 "data_size": 63488 00:13:57.382 } 00:13:57.382 ] 00:13:57.382 }' 00:13:57.382 08:51:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.382 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.952 "name": "raid_bdev1", 00:13:57.952 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:57.952 "strip_size_kb": 0, 00:13:57.952 "state": "online", 00:13:57.952 "raid_level": "raid1", 00:13:57.952 "superblock": true, 00:13:57.952 "num_base_bdevs": 4, 00:13:57.952 "num_base_bdevs_discovered": 2, 00:13:57.952 "num_base_bdevs_operational": 2, 00:13:57.952 "base_bdevs_list": [ 00:13:57.952 { 00:13:57.952 "name": null, 00:13:57.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.952 "is_configured": false, 00:13:57.952 "data_offset": 0, 00:13:57.952 "data_size": 63488 00:13:57.952 }, 00:13:57.952 
{ 00:13:57.952 "name": null, 00:13:57.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.952 "is_configured": false, 00:13:57.952 "data_offset": 2048, 00:13:57.952 "data_size": 63488 00:13:57.952 }, 00:13:57.952 { 00:13:57.952 "name": "BaseBdev3", 00:13:57.952 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:57.952 "is_configured": true, 00:13:57.952 "data_offset": 2048, 00:13:57.952 "data_size": 63488 00:13:57.952 }, 00:13:57.952 { 00:13:57.952 "name": "BaseBdev4", 00:13:57.952 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:57.952 "is_configured": true, 00:13:57.952 "data_offset": 2048, 00:13:57.952 "data_size": 63488 00:13:57.952 } 00:13:57.952 ] 00:13:57.952 }' 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.952 [2024-09-28 08:51:35.928760] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:57.952 [2024-09-28 08:51:35.928826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.952 [2024-09-28 08:51:35.928847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:57.952 [2024-09-28 08:51:35.928860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.952 [2024-09-28 08:51:35.929372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.952 [2024-09-28 08:51:35.929393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:57.952 [2024-09-28 08:51:35.929477] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:57.952 [2024-09-28 08:51:35.929495] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:57.952 [2024-09-28 08:51:35.929503] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:57.952 [2024-09-28 08:51:35.929520] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:57.952 BaseBdev1 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.952 08:51:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.335 08:51:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.335 "name": "raid_bdev1", 00:13:59.335 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:59.335 "strip_size_kb": 0, 00:13:59.335 "state": "online", 00:13:59.335 "raid_level": "raid1", 00:13:59.335 "superblock": true, 00:13:59.335 "num_base_bdevs": 4, 00:13:59.335 "num_base_bdevs_discovered": 2, 00:13:59.335 "num_base_bdevs_operational": 2, 00:13:59.335 "base_bdevs_list": [ 00:13:59.335 { 00:13:59.335 "name": null, 00:13:59.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.335 "is_configured": false, 00:13:59.335 "data_offset": 0, 00:13:59.335 "data_size": 63488 00:13:59.335 }, 00:13:59.335 { 00:13:59.335 "name": null, 00:13:59.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.335 
"is_configured": false, 00:13:59.335 "data_offset": 2048, 00:13:59.335 "data_size": 63488 00:13:59.335 }, 00:13:59.335 { 00:13:59.335 "name": "BaseBdev3", 00:13:59.335 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:59.335 "is_configured": true, 00:13:59.335 "data_offset": 2048, 00:13:59.335 "data_size": 63488 00:13:59.335 }, 00:13:59.335 { 00:13:59.335 "name": "BaseBdev4", 00:13:59.335 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:59.335 "is_configured": true, 00:13:59.335 "data_offset": 2048, 00:13:59.335 "data_size": 63488 00:13:59.335 } 00:13:59.335 ] 00:13:59.335 }' 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.335 08:51:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:59.595 "name": "raid_bdev1", 00:13:59.595 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:13:59.595 "strip_size_kb": 0, 00:13:59.595 "state": "online", 00:13:59.595 "raid_level": "raid1", 00:13:59.595 "superblock": true, 00:13:59.595 "num_base_bdevs": 4, 00:13:59.595 "num_base_bdevs_discovered": 2, 00:13:59.595 "num_base_bdevs_operational": 2, 00:13:59.595 "base_bdevs_list": [ 00:13:59.595 { 00:13:59.595 "name": null, 00:13:59.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.595 "is_configured": false, 00:13:59.595 "data_offset": 0, 00:13:59.595 "data_size": 63488 00:13:59.595 }, 00:13:59.595 { 00:13:59.595 "name": null, 00:13:59.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.595 "is_configured": false, 00:13:59.595 "data_offset": 2048, 00:13:59.595 "data_size": 63488 00:13:59.595 }, 00:13:59.595 { 00:13:59.595 "name": "BaseBdev3", 00:13:59.595 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:13:59.595 "is_configured": true, 00:13:59.595 "data_offset": 2048, 00:13:59.595 "data_size": 63488 00:13:59.595 }, 00:13:59.595 { 00:13:59.595 "name": "BaseBdev4", 00:13:59.595 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:13:59.595 "is_configured": true, 00:13:59.595 "data_offset": 2048, 00:13:59.595 "data_size": 63488 00:13:59.595 } 00:13:59.595 ] 00:13:59.595 }' 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.595 [2024-09-28 08:51:37.506264] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.595 [2024-09-28 08:51:37.506502] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:59.595 [2024-09-28 08:51:37.506517] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:59.595 request: 00:13:59.595 { 00:13:59.595 "base_bdev": "BaseBdev1", 00:13:59.595 "raid_bdev": "raid_bdev1", 00:13:59.595 "method": "bdev_raid_add_base_bdev", 00:13:59.595 "req_id": 1 00:13:59.595 } 00:13:59.595 Got JSON-RPC error response 00:13:59.595 response: 00:13:59.595 { 00:13:59.595 "code": -22, 00:13:59.595 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:59.595 } 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:59.595 08:51:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:00.534 08:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:00.534 08:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.534 08:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.534 08:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.534 08:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.534 08:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:00.534 08:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.534 08:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.534 08:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.534 08:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.794 08:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.794 08:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.794 08:51:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.794 08:51:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:00.794 08:51:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.794 08:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.794 "name": "raid_bdev1", 00:14:00.794 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:14:00.794 "strip_size_kb": 0, 00:14:00.794 "state": "online", 00:14:00.794 "raid_level": "raid1", 00:14:00.794 "superblock": true, 00:14:00.794 "num_base_bdevs": 4, 00:14:00.794 "num_base_bdevs_discovered": 2, 00:14:00.794 "num_base_bdevs_operational": 2, 00:14:00.794 "base_bdevs_list": [ 00:14:00.794 { 00:14:00.794 "name": null, 00:14:00.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.794 "is_configured": false, 00:14:00.794 "data_offset": 0, 00:14:00.794 "data_size": 63488 00:14:00.794 }, 00:14:00.794 { 00:14:00.794 "name": null, 00:14:00.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.794 "is_configured": false, 00:14:00.794 "data_offset": 2048, 00:14:00.794 "data_size": 63488 00:14:00.794 }, 00:14:00.794 { 00:14:00.794 "name": "BaseBdev3", 00:14:00.794 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:14:00.794 "is_configured": true, 00:14:00.794 "data_offset": 2048, 00:14:00.794 "data_size": 63488 00:14:00.794 }, 00:14:00.794 { 00:14:00.794 "name": "BaseBdev4", 00:14:00.794 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:14:00.794 "is_configured": true, 00:14:00.794 "data_offset": 2048, 00:14:00.794 "data_size": 63488 00:14:00.794 } 00:14:00.794 ] 00:14:00.794 }' 00:14:00.794 08:51:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.794 08:51:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.054 08:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:01.054 08:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.054 08:51:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:01.054 08:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:01.054 08:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.054 08:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.054 08:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.054 08:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.054 08:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.054 08:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.314 08:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.314 "name": "raid_bdev1", 00:14:01.314 "uuid": "7cf219fd-3c54-4d58-aa1c-f72da46c7e13", 00:14:01.314 "strip_size_kb": 0, 00:14:01.314 "state": "online", 00:14:01.314 "raid_level": "raid1", 00:14:01.314 "superblock": true, 00:14:01.314 "num_base_bdevs": 4, 00:14:01.314 "num_base_bdevs_discovered": 2, 00:14:01.314 "num_base_bdevs_operational": 2, 00:14:01.314 "base_bdevs_list": [ 00:14:01.314 { 00:14:01.314 "name": null, 00:14:01.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.314 "is_configured": false, 00:14:01.314 "data_offset": 0, 00:14:01.314 "data_size": 63488 00:14:01.314 }, 00:14:01.314 { 00:14:01.314 "name": null, 00:14:01.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.314 "is_configured": false, 00:14:01.314 "data_offset": 2048, 00:14:01.314 "data_size": 63488 00:14:01.314 }, 00:14:01.314 { 00:14:01.314 "name": "BaseBdev3", 00:14:01.314 "uuid": "705d321c-d44b-507a-aebe-bd2ce4ff251b", 00:14:01.314 "is_configured": true, 00:14:01.314 "data_offset": 2048, 00:14:01.314 "data_size": 63488 00:14:01.314 }, 
00:14:01.314 { 00:14:01.314 "name": "BaseBdev4", 00:14:01.314 "uuid": "253798d3-1a52-517e-838a-ed0320f21163", 00:14:01.314 "is_configured": true, 00:14:01.314 "data_offset": 2048, 00:14:01.314 "data_size": 63488 00:14:01.314 } 00:14:01.314 ] 00:14:01.314 }' 00:14:01.314 08:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.314 08:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:01.314 08:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.314 08:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:01.314 08:51:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77971 00:14:01.314 08:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77971 ']' 00:14:01.314 08:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 77971 00:14:01.314 08:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:01.314 08:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:01.314 08:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77971 00:14:01.314 08:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:01.314 08:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:01.314 08:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77971' 00:14:01.314 killing process with pid 77971 00:14:01.314 08:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 77971 00:14:01.315 Received shutdown signal, test time was about 60.000000 seconds 00:14:01.315 00:14:01.315 Latency(us) 00:14:01.315 Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.315 =================================================================================================================== 00:14:01.315 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:01.315 [2024-09-28 08:51:39.203580] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:01.315 08:51:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 77971 00:14:01.315 [2024-09-28 08:51:39.203735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.315 [2024-09-28 08:51:39.203812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.315 [2024-09-28 08:51:39.203821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:01.884 [2024-09-28 08:51:39.713145] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:03.264 00:14:03.264 real 0m25.583s 00:14:03.264 user 0m30.358s 00:14:03.264 sys 0m4.276s 00:14:03.264 ************************************ 00:14:03.264 END TEST raid_rebuild_test_sb 00:14:03.264 ************************************ 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.264 08:51:41 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:03.264 08:51:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:03.264 08:51:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:03.264 08:51:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:03.264 ************************************ 00:14:03.264 START TEST raid_rebuild_test_io 
00:14:03.264 ************************************ 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:03.264 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( 
i++ )) 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78732 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78732 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 78732 ']' 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:03.265 08:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.265 [2024-09-28 08:51:41.194764] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:03.265 [2024-09-28 08:51:41.194944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:03.265 Zero copy mechanism will not be used. 00:14:03.265 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78732 ] 00:14:03.524 [2024-09-28 08:51:41.356800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.784 [2024-09-28 08:51:41.554632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.784 [2024-09-28 08:51:41.753134] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.784 [2024-09-28 08:51:41.753247] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.044 08:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:04.044 08:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:14:04.044 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:04.044 08:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:04.044 08:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:04.044 08:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.304 BaseBdev1_malloc 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.304 [2024-09-28 08:51:42.046123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:04.304 [2024-09-28 08:51:42.046213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.304 [2024-09-28 08:51:42.046240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:04.304 [2024-09-28 08:51:42.046256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.304 [2024-09-28 08:51:42.048417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.304 [2024-09-28 08:51:42.048476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:04.304 BaseBdev1 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.304 BaseBdev2_malloc 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.304 [2024-09-28 08:51:42.129948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:04.304 [2024-09-28 08:51:42.130016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.304 [2024-09-28 08:51:42.130038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:04.304 [2024-09-28 08:51:42.130053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.304 [2024-09-28 08:51:42.132136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.304 [2024-09-28 08:51:42.132278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:04.304 BaseBdev2 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.304 BaseBdev3_malloc 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.304 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.304 [2024-09-28 08:51:42.184730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:04.304 [2024-09-28 08:51:42.184794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.304 [2024-09-28 08:51:42.184818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:04.304 [2024-09-28 08:51:42.184830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.304 [2024-09-28 08:51:42.186834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.305 [2024-09-28 08:51:42.186961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:04.305 BaseBdev3 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.305 BaseBdev4_malloc 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:04.305 [2024-09-28 08:51:42.240220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:04.305 [2024-09-28 08:51:42.240279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.305 [2024-09-28 08:51:42.240298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:04.305 [2024-09-28 08:51:42.240311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.305 [2024-09-28 08:51:42.242289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.305 [2024-09-28 08:51:42.242420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:04.305 BaseBdev4 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.305 spare_malloc 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.305 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.565 spare_delay 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:04.565 08:51:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.565 [2024-09-28 08:51:42.307987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:04.565 [2024-09-28 08:51:42.308051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.565 [2024-09-28 08:51:42.308072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:04.565 [2024-09-28 08:51:42.308084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.565 [2024-09-28 08:51:42.310086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.565 [2024-09-28 08:51:42.310134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:04.565 spare 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.565 [2024-09-28 08:51:42.320025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.565 [2024-09-28 08:51:42.321818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.565 [2024-09-28 08:51:42.321886] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:04.565 [2024-09-28 08:51:42.321941] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:04.565 [2024-09-28 08:51:42.322018] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:04.565 [2024-09-28 08:51:42.322030] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:04.565 [2024-09-28 08:51:42.322272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:04.565 [2024-09-28 08:51:42.322444] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:04.565 [2024-09-28 08:51:42.322455] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:04.565 [2024-09-28 08:51:42.322597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.565 "name": "raid_bdev1", 00:14:04.565 "uuid": "3d989cbf-991f-4071-96a3-400833cacbd0", 00:14:04.565 "strip_size_kb": 0, 00:14:04.565 "state": "online", 00:14:04.565 "raid_level": "raid1", 00:14:04.565 "superblock": false, 00:14:04.565 "num_base_bdevs": 4, 00:14:04.565 "num_base_bdevs_discovered": 4, 00:14:04.565 "num_base_bdevs_operational": 4, 00:14:04.565 "base_bdevs_list": [ 00:14:04.565 { 00:14:04.565 "name": "BaseBdev1", 00:14:04.565 "uuid": "8ee527c9-dca5-5b2e-be62-cbadd6444572", 00:14:04.565 "is_configured": true, 00:14:04.565 "data_offset": 0, 00:14:04.565 "data_size": 65536 00:14:04.565 }, 00:14:04.565 { 00:14:04.565 "name": "BaseBdev2", 00:14:04.565 "uuid": "d1a60b34-7cf1-5b90-a8f8-5f7200a7e2eb", 00:14:04.565 "is_configured": true, 00:14:04.565 "data_offset": 0, 00:14:04.565 "data_size": 65536 00:14:04.565 }, 00:14:04.565 { 00:14:04.565 "name": "BaseBdev3", 00:14:04.565 "uuid": "d74b2be2-54ca-51ef-aa7a-ff81ab0c34f8", 00:14:04.565 "is_configured": true, 00:14:04.565 "data_offset": 0, 00:14:04.565 "data_size": 65536 00:14:04.565 }, 00:14:04.565 { 00:14:04.565 "name": "BaseBdev4", 00:14:04.565 "uuid": "1215d473-a5e9-5032-8f27-10195f66292b", 00:14:04.565 "is_configured": true, 00:14:04.565 "data_offset": 0, 00:14:04.565 "data_size": 65536 00:14:04.565 } 00:14:04.565 ] 00:14:04.565 }' 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:04.565 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.825 [2024-09-28 08:51:42.684055] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.825 [2024-09-28 08:51:42.779672] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.825 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.085 08:51:42 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.085 "name": "raid_bdev1", 00:14:05.085 "uuid": "3d989cbf-991f-4071-96a3-400833cacbd0", 00:14:05.085 "strip_size_kb": 0, 00:14:05.085 "state": "online", 00:14:05.085 "raid_level": "raid1", 00:14:05.085 "superblock": false, 00:14:05.085 "num_base_bdevs": 4, 00:14:05.085 "num_base_bdevs_discovered": 3, 00:14:05.085 "num_base_bdevs_operational": 3, 00:14:05.085 "base_bdevs_list": [ 00:14:05.085 { 00:14:05.085 "name": null, 00:14:05.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.085 "is_configured": false, 00:14:05.085 "data_offset": 0, 00:14:05.085 "data_size": 65536 00:14:05.085 }, 00:14:05.085 { 00:14:05.085 "name": "BaseBdev2", 00:14:05.085 "uuid": "d1a60b34-7cf1-5b90-a8f8-5f7200a7e2eb", 00:14:05.085 "is_configured": true, 00:14:05.085 "data_offset": 0, 00:14:05.085 "data_size": 65536 00:14:05.085 }, 00:14:05.085 { 00:14:05.085 "name": "BaseBdev3", 00:14:05.085 "uuid": "d74b2be2-54ca-51ef-aa7a-ff81ab0c34f8", 00:14:05.085 "is_configured": true, 00:14:05.085 "data_offset": 0, 00:14:05.085 "data_size": 65536 00:14:05.085 }, 00:14:05.085 { 00:14:05.085 "name": "BaseBdev4", 00:14:05.085 "uuid": "1215d473-a5e9-5032-8f27-10195f66292b", 00:14:05.085 "is_configured": true, 00:14:05.085 "data_offset": 0, 00:14:05.085 "data_size": 65536 00:14:05.085 } 00:14:05.085 ] 00:14:05.085 }' 00:14:05.085 08:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.085 08:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.085 [2024-09-28 08:51:42.879390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:05.085 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:05.085 Zero copy mechanism will not be used. 00:14:05.085 Running I/O for 60 seconds... 
00:14:05.344 08:51:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:05.344 08:51:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.344 08:51:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.344 [2024-09-28 08:51:43.281878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.344 08:51:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.344 08:51:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:05.604 [2024-09-28 08:51:43.344464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:05.604 [2024-09-28 08:51:43.346480] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:05.604 [2024-09-28 08:51:43.461346] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:05.604 [2024-09-28 08:51:43.462051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:05.604 [2024-09-28 08:51:43.572376] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:05.604 [2024-09-28 08:51:43.572728] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:06.173 176.00 IOPS, 528.00 MiB/s [2024-09-28 08:51:43.907402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:06.173 [2024-09-28 08:51:44.119422] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:06.173 [2024-09-28 08:51:44.119799] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:06.433 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.433 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.433 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.433 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.433 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.433 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.433 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.433 08:51:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.433 08:51:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.433 08:51:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.433 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.433 "name": "raid_bdev1", 00:14:06.433 "uuid": "3d989cbf-991f-4071-96a3-400833cacbd0", 00:14:06.433 "strip_size_kb": 0, 00:14:06.433 "state": "online", 00:14:06.433 "raid_level": "raid1", 00:14:06.433 "superblock": false, 00:14:06.433 "num_base_bdevs": 4, 00:14:06.433 "num_base_bdevs_discovered": 4, 00:14:06.433 "num_base_bdevs_operational": 4, 00:14:06.433 "process": { 00:14:06.433 "type": "rebuild", 00:14:06.433 "target": "spare", 00:14:06.433 "progress": { 00:14:06.433 "blocks": 12288, 00:14:06.433 "percent": 18 00:14:06.433 } 00:14:06.433 }, 00:14:06.433 "base_bdevs_list": [ 00:14:06.433 { 00:14:06.433 "name": "spare", 00:14:06.433 "uuid": "d23f64d6-2156-569b-ab99-ffbd3609fcda", 00:14:06.433 
"is_configured": true, 00:14:06.433 "data_offset": 0, 00:14:06.433 "data_size": 65536 00:14:06.433 }, 00:14:06.433 { 00:14:06.433 "name": "BaseBdev2", 00:14:06.433 "uuid": "d1a60b34-7cf1-5b90-a8f8-5f7200a7e2eb", 00:14:06.433 "is_configured": true, 00:14:06.433 "data_offset": 0, 00:14:06.433 "data_size": 65536 00:14:06.433 }, 00:14:06.433 { 00:14:06.433 "name": "BaseBdev3", 00:14:06.433 "uuid": "d74b2be2-54ca-51ef-aa7a-ff81ab0c34f8", 00:14:06.433 "is_configured": true, 00:14:06.433 "data_offset": 0, 00:14:06.433 "data_size": 65536 00:14:06.433 }, 00:14:06.433 { 00:14:06.433 "name": "BaseBdev4", 00:14:06.433 "uuid": "1215d473-a5e9-5032-8f27-10195f66292b", 00:14:06.433 "is_configured": true, 00:14:06.433 "data_offset": 0, 00:14:06.433 "data_size": 65536 00:14:06.433 } 00:14:06.433 ] 00:14:06.433 }' 00:14:06.433 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.433 [2024-09-28 08:51:44.386573] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.693 [2024-09-28 08:51:44.457276] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.693 [2024-09-28 08:51:44.494429] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 
12288 offset_end: 18432 00:14:06.693 [2024-09-28 08:51:44.601927] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:06.693 [2024-09-28 08:51:44.611795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.693 [2024-09-28 08:51:44.611857] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.693 [2024-09-28 08:51:44.611874] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:06.693 [2024-09-28 08:51:44.633364] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.693 08:51:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.953 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.953 "name": "raid_bdev1", 00:14:06.953 "uuid": "3d989cbf-991f-4071-96a3-400833cacbd0", 00:14:06.953 "strip_size_kb": 0, 00:14:06.953 "state": "online", 00:14:06.953 "raid_level": "raid1", 00:14:06.953 "superblock": false, 00:14:06.953 "num_base_bdevs": 4, 00:14:06.953 "num_base_bdevs_discovered": 3, 00:14:06.953 "num_base_bdevs_operational": 3, 00:14:06.953 "base_bdevs_list": [ 00:14:06.953 { 00:14:06.953 "name": null, 00:14:06.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.953 "is_configured": false, 00:14:06.954 "data_offset": 0, 00:14:06.954 "data_size": 65536 00:14:06.954 }, 00:14:06.954 { 00:14:06.954 "name": "BaseBdev2", 00:14:06.954 "uuid": "d1a60b34-7cf1-5b90-a8f8-5f7200a7e2eb", 00:14:06.954 "is_configured": true, 00:14:06.954 "data_offset": 0, 00:14:06.954 "data_size": 65536 00:14:06.954 }, 00:14:06.954 { 00:14:06.954 "name": "BaseBdev3", 00:14:06.954 "uuid": "d74b2be2-54ca-51ef-aa7a-ff81ab0c34f8", 00:14:06.954 "is_configured": true, 00:14:06.954 "data_offset": 0, 00:14:06.954 "data_size": 65536 00:14:06.954 }, 00:14:06.954 { 00:14:06.954 "name": "BaseBdev4", 00:14:06.954 "uuid": "1215d473-a5e9-5032-8f27-10195f66292b", 00:14:06.954 "is_configured": true, 00:14:06.954 "data_offset": 0, 00:14:06.954 "data_size": 65536 00:14:06.954 } 00:14:06.954 ] 00:14:06.954 }' 00:14:06.954 08:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.954 08:51:44 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.213 146.50 IOPS, 439.50 MiB/s 08:51:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.213 08:51:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.213 08:51:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.213 08:51:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.213 08:51:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.213 08:51:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.213 08:51:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.213 08:51:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.213 08:51:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.213 08:51:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.213 08:51:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.213 "name": "raid_bdev1", 00:14:07.213 "uuid": "3d989cbf-991f-4071-96a3-400833cacbd0", 00:14:07.213 "strip_size_kb": 0, 00:14:07.213 "state": "online", 00:14:07.213 "raid_level": "raid1", 00:14:07.213 "superblock": false, 00:14:07.213 "num_base_bdevs": 4, 00:14:07.213 "num_base_bdevs_discovered": 3, 00:14:07.213 "num_base_bdevs_operational": 3, 00:14:07.213 "base_bdevs_list": [ 00:14:07.213 { 00:14:07.213 "name": null, 00:14:07.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.213 "is_configured": false, 00:14:07.213 "data_offset": 0, 00:14:07.213 "data_size": 65536 00:14:07.213 }, 00:14:07.213 { 00:14:07.213 "name": "BaseBdev2", 00:14:07.213 "uuid": 
"d1a60b34-7cf1-5b90-a8f8-5f7200a7e2eb", 00:14:07.213 "is_configured": true, 00:14:07.213 "data_offset": 0, 00:14:07.213 "data_size": 65536 00:14:07.213 }, 00:14:07.213 { 00:14:07.213 "name": "BaseBdev3", 00:14:07.213 "uuid": "d74b2be2-54ca-51ef-aa7a-ff81ab0c34f8", 00:14:07.214 "is_configured": true, 00:14:07.214 "data_offset": 0, 00:14:07.214 "data_size": 65536 00:14:07.214 }, 00:14:07.214 { 00:14:07.214 "name": "BaseBdev4", 00:14:07.214 "uuid": "1215d473-a5e9-5032-8f27-10195f66292b", 00:14:07.214 "is_configured": true, 00:14:07.214 "data_offset": 0, 00:14:07.214 "data_size": 65536 00:14:07.214 } 00:14:07.214 ] 00:14:07.214 }' 00:14:07.214 08:51:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.214 08:51:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.214 08:51:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.214 08:51:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.214 08:51:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:07.214 08:51:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.214 08:51:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.473 [2024-09-28 08:51:45.208911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.473 08:51:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.473 08:51:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:07.473 [2024-09-28 08:51:45.276846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:07.473 [2024-09-28 08:51:45.278926] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 
00:14:07.473 [2024-09-28 08:51:45.392255] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:07.473 [2024-09-28 08:51:45.393612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:07.733 [2024-09-28 08:51:45.609195] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:07.733 [2024-09-28 08:51:45.609668] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:08.252 151.33 IOPS, 454.00 MiB/s [2024-09-28 08:51:46.074147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:08.252 [2024-09-28 08:51:46.074967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:08.544 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.544 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.544 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.544 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.544 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.544 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.544 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.544 08:51:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.544 08:51:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:14:08.544 08:51:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.544 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.544 "name": "raid_bdev1", 00:14:08.544 "uuid": "3d989cbf-991f-4071-96a3-400833cacbd0", 00:14:08.544 "strip_size_kb": 0, 00:14:08.544 "state": "online", 00:14:08.544 "raid_level": "raid1", 00:14:08.544 "superblock": false, 00:14:08.544 "num_base_bdevs": 4, 00:14:08.544 "num_base_bdevs_discovered": 4, 00:14:08.544 "num_base_bdevs_operational": 4, 00:14:08.544 "process": { 00:14:08.544 "type": "rebuild", 00:14:08.544 "target": "spare", 00:14:08.544 "progress": { 00:14:08.544 "blocks": 10240, 00:14:08.544 "percent": 15 00:14:08.544 } 00:14:08.544 }, 00:14:08.544 "base_bdevs_list": [ 00:14:08.544 { 00:14:08.544 "name": "spare", 00:14:08.544 "uuid": "d23f64d6-2156-569b-ab99-ffbd3609fcda", 00:14:08.544 "is_configured": true, 00:14:08.544 "data_offset": 0, 00:14:08.544 "data_size": 65536 00:14:08.544 }, 00:14:08.544 { 00:14:08.544 "name": "BaseBdev2", 00:14:08.544 "uuid": "d1a60b34-7cf1-5b90-a8f8-5f7200a7e2eb", 00:14:08.544 "is_configured": true, 00:14:08.544 "data_offset": 0, 00:14:08.544 "data_size": 65536 00:14:08.544 }, 00:14:08.544 { 00:14:08.544 "name": "BaseBdev3", 00:14:08.544 "uuid": "d74b2be2-54ca-51ef-aa7a-ff81ab0c34f8", 00:14:08.544 "is_configured": true, 00:14:08.544 "data_offset": 0, 00:14:08.544 "data_size": 65536 00:14:08.544 }, 00:14:08.544 { 00:14:08.544 "name": "BaseBdev4", 00:14:08.544 "uuid": "1215d473-a5e9-5032-8f27-10195f66292b", 00:14:08.544 "is_configured": true, 00:14:08.544 "data_offset": 0, 00:14:08.544 "data_size": 65536 00:14:08.545 } 00:14:08.545 ] 00:14:08.545 }' 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.545 08:51:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.545 [2024-09-28 08:51:46.394845] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:08.545 [2024-09-28 08:51:46.414640] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:08.545 [2024-09-28 08:51:46.415960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:08.545 [2024-09-28 08:51:46.518305] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:08.545 [2024-09-28 08:51:46.518392] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:08.545 [2024-09-28 08:51:46.524654] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.545 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.805 "name": "raid_bdev1", 00:14:08.805 "uuid": "3d989cbf-991f-4071-96a3-400833cacbd0", 00:14:08.805 "strip_size_kb": 0, 00:14:08.805 "state": "online", 00:14:08.805 "raid_level": "raid1", 00:14:08.805 "superblock": false, 00:14:08.805 "num_base_bdevs": 4, 00:14:08.805 "num_base_bdevs_discovered": 3, 00:14:08.805 "num_base_bdevs_operational": 3, 00:14:08.805 "process": { 00:14:08.805 "type": "rebuild", 00:14:08.805 "target": "spare", 00:14:08.805 "progress": { 00:14:08.805 "blocks": 14336, 00:14:08.805 "percent": 21 00:14:08.805 } 00:14:08.805 }, 00:14:08.805 "base_bdevs_list": [ 00:14:08.805 { 00:14:08.805 
"name": "spare", 00:14:08.805 "uuid": "d23f64d6-2156-569b-ab99-ffbd3609fcda", 00:14:08.805 "is_configured": true, 00:14:08.805 "data_offset": 0, 00:14:08.805 "data_size": 65536 00:14:08.805 }, 00:14:08.805 { 00:14:08.805 "name": null, 00:14:08.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.805 "is_configured": false, 00:14:08.805 "data_offset": 0, 00:14:08.805 "data_size": 65536 00:14:08.805 }, 00:14:08.805 { 00:14:08.805 "name": "BaseBdev3", 00:14:08.805 "uuid": "d74b2be2-54ca-51ef-aa7a-ff81ab0c34f8", 00:14:08.805 "is_configured": true, 00:14:08.805 "data_offset": 0, 00:14:08.805 "data_size": 65536 00:14:08.805 }, 00:14:08.805 { 00:14:08.805 "name": "BaseBdev4", 00:14:08.805 "uuid": "1215d473-a5e9-5032-8f27-10195f66292b", 00:14:08.805 "is_configured": true, 00:14:08.805 "data_offset": 0, 00:14:08.805 "data_size": 65536 00:14:08.805 } 00:14:08.805 ] 00:14:08.805 }' 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=486 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.805 "name": "raid_bdev1", 00:14:08.805 "uuid": "3d989cbf-991f-4071-96a3-400833cacbd0", 00:14:08.805 "strip_size_kb": 0, 00:14:08.805 "state": "online", 00:14:08.805 "raid_level": "raid1", 00:14:08.805 "superblock": false, 00:14:08.805 "num_base_bdevs": 4, 00:14:08.805 "num_base_bdevs_discovered": 3, 00:14:08.805 "num_base_bdevs_operational": 3, 00:14:08.805 "process": { 00:14:08.805 "type": "rebuild", 00:14:08.805 "target": "spare", 00:14:08.805 "progress": { 00:14:08.805 "blocks": 14336, 00:14:08.805 "percent": 21 00:14:08.805 } 00:14:08.805 }, 00:14:08.805 "base_bdevs_list": [ 00:14:08.805 { 00:14:08.805 "name": "spare", 00:14:08.805 "uuid": "d23f64d6-2156-569b-ab99-ffbd3609fcda", 00:14:08.805 "is_configured": true, 00:14:08.805 "data_offset": 0, 00:14:08.805 "data_size": 65536 00:14:08.805 }, 00:14:08.805 { 00:14:08.805 "name": null, 00:14:08.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.805 "is_configured": false, 00:14:08.805 "data_offset": 0, 00:14:08.805 "data_size": 65536 00:14:08.805 }, 00:14:08.805 { 00:14:08.805 "name": "BaseBdev3", 00:14:08.805 "uuid": "d74b2be2-54ca-51ef-aa7a-ff81ab0c34f8", 00:14:08.805 "is_configured": true, 00:14:08.805 "data_offset": 0, 00:14:08.805 
"data_size": 65536 00:14:08.805 }, 00:14:08.805 { 00:14:08.805 "name": "BaseBdev4", 00:14:08.805 "uuid": "1215d473-a5e9-5032-8f27-10195f66292b", 00:14:08.805 "is_configured": true, 00:14:08.805 "data_offset": 0, 00:14:08.805 "data_size": 65536 00:14:08.805 } 00:14:08.805 ] 00:14:08.805 }' 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.805 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.064 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.064 08:51:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.703 132.75 IOPS, 398.25 MiB/s [2024-09-28 08:51:47.325506] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:09.703 [2024-09-28 08:51:47.548429] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:09.963 08:51:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:09.963 08:51:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.963 08:51:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.963 08:51:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.963 08:51:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.963 08:51:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.963 08:51:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.963 08:51:47 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.963 08:51:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.963 08:51:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.963 08:51:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.963 08:51:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.963 "name": "raid_bdev1", 00:14:09.963 "uuid": "3d989cbf-991f-4071-96a3-400833cacbd0", 00:14:09.963 "strip_size_kb": 0, 00:14:09.963 "state": "online", 00:14:09.963 "raid_level": "raid1", 00:14:09.963 "superblock": false, 00:14:09.963 "num_base_bdevs": 4, 00:14:09.963 "num_base_bdevs_discovered": 3, 00:14:09.963 "num_base_bdevs_operational": 3, 00:14:09.963 "process": { 00:14:09.963 "type": "rebuild", 00:14:09.963 "target": "spare", 00:14:09.963 "progress": { 00:14:09.963 "blocks": 30720, 00:14:09.963 "percent": 46 00:14:09.963 } 00:14:09.963 }, 00:14:09.963 "base_bdevs_list": [ 00:14:09.963 { 00:14:09.963 "name": "spare", 00:14:09.963 "uuid": "d23f64d6-2156-569b-ab99-ffbd3609fcda", 00:14:09.963 "is_configured": true, 00:14:09.963 "data_offset": 0, 00:14:09.963 "data_size": 65536 00:14:09.963 }, 00:14:09.963 { 00:14:09.963 "name": null, 00:14:09.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.963 "is_configured": false, 00:14:09.963 "data_offset": 0, 00:14:09.963 "data_size": 65536 00:14:09.963 }, 00:14:09.963 { 00:14:09.963 "name": "BaseBdev3", 00:14:09.963 "uuid": "d74b2be2-54ca-51ef-aa7a-ff81ab0c34f8", 00:14:09.963 "is_configured": true, 00:14:09.963 "data_offset": 0, 00:14:09.963 "data_size": 65536 00:14:09.963 }, 00:14:09.963 { 00:14:09.963 "name": "BaseBdev4", 00:14:09.963 "uuid": "1215d473-a5e9-5032-8f27-10195f66292b", 00:14:09.963 "is_configured": true, 00:14:09.963 "data_offset": 0, 00:14:09.963 "data_size": 65536 00:14:09.963 } 
00:14:09.963 ] 00:14:09.963 }' 00:14:09.963 117.20 IOPS, 351.60 MiB/s 08:51:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.963 [2024-09-28 08:51:47.896522] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:09.963 08:51:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.963 08:51:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.963 08:51:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.963 08:51:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:10.223 [2024-09-28 08:51:48.010693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:10.483 [2024-09-28 08:51:48.464098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:11.051 [2024-09-28 08:51:48.800834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:11.051 105.17 IOPS, 315.50 MiB/s [2024-09-28 08:51:48.907422] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:11.051 08:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:11.051 08:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.051 08:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.051 08:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.051 08:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:14:11.051 08:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.051 08:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.051 08:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.051 08:51:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.051 08:51:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.051 08:51:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.051 08:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.051 "name": "raid_bdev1", 00:14:11.051 "uuid": "3d989cbf-991f-4071-96a3-400833cacbd0", 00:14:11.051 "strip_size_kb": 0, 00:14:11.051 "state": "online", 00:14:11.051 "raid_level": "raid1", 00:14:11.051 "superblock": false, 00:14:11.051 "num_base_bdevs": 4, 00:14:11.051 "num_base_bdevs_discovered": 3, 00:14:11.051 "num_base_bdevs_operational": 3, 00:14:11.051 "process": { 00:14:11.051 "type": "rebuild", 00:14:11.051 "target": "spare", 00:14:11.051 "progress": { 00:14:11.051 "blocks": 47104, 00:14:11.051 "percent": 71 00:14:11.051 } 00:14:11.051 }, 00:14:11.051 "base_bdevs_list": [ 00:14:11.051 { 00:14:11.051 "name": "spare", 00:14:11.051 "uuid": "d23f64d6-2156-569b-ab99-ffbd3609fcda", 00:14:11.051 "is_configured": true, 00:14:11.051 "data_offset": 0, 00:14:11.051 "data_size": 65536 00:14:11.051 }, 00:14:11.051 { 00:14:11.051 "name": null, 00:14:11.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.051 "is_configured": false, 00:14:11.051 "data_offset": 0, 00:14:11.051 "data_size": 65536 00:14:11.051 }, 00:14:11.051 { 00:14:11.051 "name": "BaseBdev3", 00:14:11.051 "uuid": "d74b2be2-54ca-51ef-aa7a-ff81ab0c34f8", 00:14:11.051 "is_configured": true, 00:14:11.051 "data_offset": 0, 00:14:11.051 
"data_size": 65536 00:14:11.051 }, 00:14:11.051 { 00:14:11.051 "name": "BaseBdev4", 00:14:11.051 "uuid": "1215d473-a5e9-5032-8f27-10195f66292b", 00:14:11.051 "is_configured": true, 00:14:11.051 "data_offset": 0, 00:14:11.051 "data_size": 65536 00:14:11.051 } 00:14:11.051 ] 00:14:11.051 }' 00:14:11.051 08:51:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.051 08:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.051 08:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.311 08:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.311 08:51:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:11.311 [2024-09-28 08:51:49.250367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:11.881 [2024-09-28 08:51:49.798463] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:12.141 94.86 IOPS, 284.57 MiB/s 08:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:12.141 08:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.141 08:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.141 08:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.141 08:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.141 08:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.141 08:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.141 08:51:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.141 08:51:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.141 08:51:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.141 08:51:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.401 08:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.401 "name": "raid_bdev1", 00:14:12.401 "uuid": "3d989cbf-991f-4071-96a3-400833cacbd0", 00:14:12.401 "strip_size_kb": 0, 00:14:12.401 "state": "online", 00:14:12.401 "raid_level": "raid1", 00:14:12.401 "superblock": false, 00:14:12.401 "num_base_bdevs": 4, 00:14:12.401 "num_base_bdevs_discovered": 3, 00:14:12.401 "num_base_bdevs_operational": 3, 00:14:12.401 "process": { 00:14:12.401 "type": "rebuild", 00:14:12.401 "target": "spare", 00:14:12.401 "progress": { 00:14:12.401 "blocks": 61440, 00:14:12.401 "percent": 93 00:14:12.401 } 00:14:12.401 }, 00:14:12.401 "base_bdevs_list": [ 00:14:12.401 { 00:14:12.401 "name": "spare", 00:14:12.401 "uuid": "d23f64d6-2156-569b-ab99-ffbd3609fcda", 00:14:12.401 "is_configured": true, 00:14:12.401 "data_offset": 0, 00:14:12.401 "data_size": 65536 00:14:12.401 }, 00:14:12.401 { 00:14:12.401 "name": null, 00:14:12.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.401 "is_configured": false, 00:14:12.401 "data_offset": 0, 00:14:12.401 "data_size": 65536 00:14:12.401 }, 00:14:12.401 { 00:14:12.401 "name": "BaseBdev3", 00:14:12.401 "uuid": "d74b2be2-54ca-51ef-aa7a-ff81ab0c34f8", 00:14:12.401 "is_configured": true, 00:14:12.401 "data_offset": 0, 00:14:12.401 "data_size": 65536 00:14:12.401 }, 00:14:12.401 { 00:14:12.401 "name": "BaseBdev4", 00:14:12.401 "uuid": "1215d473-a5e9-5032-8f27-10195f66292b", 00:14:12.401 "is_configured": true, 00:14:12.401 "data_offset": 0, 00:14:12.401 "data_size": 65536 00:14:12.401 } 
00:14:12.401 ] 00:14:12.401 }' 00:14:12.401 08:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.401 08:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.401 08:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.401 [2024-09-28 08:51:50.238953] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:12.401 08:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.401 08:51:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:12.401 [2024-09-28 08:51:50.343720] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:12.401 [2024-09-28 08:51:50.347517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.542 86.88 IOPS, 260.62 MiB/s 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.542 "name": "raid_bdev1", 00:14:13.542 "uuid": "3d989cbf-991f-4071-96a3-400833cacbd0", 00:14:13.542 "strip_size_kb": 0, 00:14:13.542 "state": "online", 00:14:13.542 "raid_level": "raid1", 00:14:13.542 "superblock": false, 00:14:13.542 "num_base_bdevs": 4, 00:14:13.542 "num_base_bdevs_discovered": 3, 00:14:13.542 "num_base_bdevs_operational": 3, 00:14:13.542 "base_bdevs_list": [ 00:14:13.542 { 00:14:13.542 "name": "spare", 00:14:13.542 "uuid": "d23f64d6-2156-569b-ab99-ffbd3609fcda", 00:14:13.542 "is_configured": true, 00:14:13.542 "data_offset": 0, 00:14:13.542 "data_size": 65536 00:14:13.542 }, 00:14:13.542 { 00:14:13.542 "name": null, 00:14:13.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.542 "is_configured": false, 00:14:13.542 "data_offset": 0, 00:14:13.542 "data_size": 65536 00:14:13.542 }, 00:14:13.542 { 00:14:13.542 "name": "BaseBdev3", 00:14:13.542 "uuid": "d74b2be2-54ca-51ef-aa7a-ff81ab0c34f8", 00:14:13.542 "is_configured": true, 00:14:13.542 "data_offset": 0, 00:14:13.542 "data_size": 65536 00:14:13.542 }, 00:14:13.542 { 00:14:13.542 "name": "BaseBdev4", 00:14:13.542 "uuid": "1215d473-a5e9-5032-8f27-10195f66292b", 00:14:13.542 "is_configured": true, 00:14:13.542 "data_offset": 0, 00:14:13.542 "data_size": 65536 00:14:13.542 } 00:14:13.542 ] 00:14:13.542 }' 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.542 08:51:51 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.542 "name": "raid_bdev1", 00:14:13.542 "uuid": "3d989cbf-991f-4071-96a3-400833cacbd0", 00:14:13.542 "strip_size_kb": 0, 00:14:13.542 "state": "online", 00:14:13.542 "raid_level": "raid1", 00:14:13.542 "superblock": false, 00:14:13.542 "num_base_bdevs": 4, 00:14:13.542 "num_base_bdevs_discovered": 3, 00:14:13.542 "num_base_bdevs_operational": 3, 00:14:13.542 "base_bdevs_list": [ 00:14:13.542 { 00:14:13.542 "name": "spare", 00:14:13.542 "uuid": "d23f64d6-2156-569b-ab99-ffbd3609fcda", 00:14:13.542 "is_configured": true, 00:14:13.542 "data_offset": 0, 00:14:13.542 "data_size": 65536 00:14:13.542 }, 
00:14:13.542 { 00:14:13.542 "name": null, 00:14:13.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.542 "is_configured": false, 00:14:13.542 "data_offset": 0, 00:14:13.542 "data_size": 65536 00:14:13.542 }, 00:14:13.542 { 00:14:13.542 "name": "BaseBdev3", 00:14:13.542 "uuid": "d74b2be2-54ca-51ef-aa7a-ff81ab0c34f8", 00:14:13.542 "is_configured": true, 00:14:13.542 "data_offset": 0, 00:14:13.542 "data_size": 65536 00:14:13.542 }, 00:14:13.542 { 00:14:13.542 "name": "BaseBdev4", 00:14:13.542 "uuid": "1215d473-a5e9-5032-8f27-10195f66292b", 00:14:13.542 "is_configured": true, 00:14:13.542 "data_offset": 0, 00:14:13.542 "data_size": 65536 00:14:13.542 } 00:14:13.542 ] 00:14:13.542 }' 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.542 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.802 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.802 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.802 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.802 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.802 08:51:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.802 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.802 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.802 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.803 08:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.803 08:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.803 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.803 08:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.803 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.803 "name": "raid_bdev1", 00:14:13.803 "uuid": "3d989cbf-991f-4071-96a3-400833cacbd0", 00:14:13.803 "strip_size_kb": 0, 00:14:13.803 "state": "online", 00:14:13.803 "raid_level": "raid1", 00:14:13.803 "superblock": false, 00:14:13.803 "num_base_bdevs": 4, 00:14:13.803 "num_base_bdevs_discovered": 3, 00:14:13.803 "num_base_bdevs_operational": 3, 00:14:13.803 "base_bdevs_list": [ 00:14:13.803 { 00:14:13.803 "name": "spare", 00:14:13.803 "uuid": "d23f64d6-2156-569b-ab99-ffbd3609fcda", 00:14:13.803 "is_configured": true, 00:14:13.803 "data_offset": 0, 00:14:13.803 "data_size": 65536 00:14:13.803 }, 00:14:13.803 { 00:14:13.803 "name": null, 00:14:13.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.803 "is_configured": false, 00:14:13.803 "data_offset": 0, 00:14:13.803 "data_size": 65536 00:14:13.803 }, 00:14:13.803 { 00:14:13.803 "name": "BaseBdev3", 00:14:13.803 "uuid": "d74b2be2-54ca-51ef-aa7a-ff81ab0c34f8", 00:14:13.803 "is_configured": true, 00:14:13.803 "data_offset": 0, 00:14:13.803 "data_size": 65536 00:14:13.803 }, 00:14:13.803 { 00:14:13.803 "name": 
"BaseBdev4", 00:14:13.803 "uuid": "1215d473-a5e9-5032-8f27-10195f66292b", 00:14:13.803 "is_configured": true, 00:14:13.803 "data_offset": 0, 00:14:13.803 "data_size": 65536 00:14:13.803 } 00:14:13.803 ] 00:14:13.803 }' 00:14:13.803 08:51:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.803 08:51:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.063 80.67 IOPS, 242.00 MiB/s 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:14.063 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.063 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.063 [2024-09-28 08:51:52.012707] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:14.063 [2024-09-28 08:51:52.012746] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:14.323 00:14:14.323 Latency(us) 00:14:14.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.323 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:14.323 raid_bdev1 : 9.20 79.48 238.45 0.00 0.00 18562.39 314.80 114931.26 00:14:14.323 =================================================================================================================== 00:14:14.323 Total : 79.48 238.45 0.00 0.00 18562.39 314.80 114931.26 00:14:14.323 [2024-09-28 08:51:52.080396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.323 { 00:14:14.323 "results": [ 00:14:14.323 { 00:14:14.323 "job": "raid_bdev1", 00:14:14.323 "core_mask": "0x1", 00:14:14.323 "workload": "randrw", 00:14:14.323 "percentage": 50, 00:14:14.323 "status": "finished", 00:14:14.323 "queue_depth": 2, 00:14:14.323 "io_size": 3145728, 00:14:14.323 "runtime": 9.19672, 00:14:14.323 "iops": 79.48485981958785, 
00:14:14.323 "mibps": 238.45457945876356, 00:14:14.323 "io_failed": 0, 00:14:14.323 "io_timeout": 0, 00:14:14.323 "avg_latency_us": 18562.38766539824, 00:14:14.323 "min_latency_us": 314.80174672489085, 00:14:14.323 "max_latency_us": 114931.2558951965 00:14:14.323 } 00:14:14.323 ], 00:14:14.323 "core_count": 1 00:14:14.323 } 00:14:14.323 [2024-09-28 08:51:52.080492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:14.323 [2024-09-28 08:51:52.080592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:14.323 [2024-09-28 08:51:52.080621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- 
# bdev_list=('spare') 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.323 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:14.607 /dev/nbd0 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:14.607 1+0 records in 00:14:14.607 1+0 records out 00:14:14.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371924 s, 11.0 MB/s 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.607 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:14.607 /dev/nbd1 00:14:14.877 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:14.877 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:14.877 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:14.878 1+0 records in 00:14:14.878 1+0 records out 00:14:14.878 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005025 s, 8.2 MB/s 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.878 08:51:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 
00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:15.138 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:15.398 /dev/nbd1 00:14:15.398 08:51:53 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:15.398 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:15.398 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:15.398 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:15.398 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:15.398 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:15.398 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:15.398 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:15.398 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:15.398 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:15.398 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.398 1+0 records in 00:14:15.398 1+0 records out 00:14:15.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402585 s, 10.2 MB/s 00:14:15.398 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.398 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:15.399 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.399 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:15.399 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:15.399 08:51:53 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.399 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:15.399 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:15.399 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:15.399 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.399 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:15.399 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:15.399 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:15.399 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.399 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:15.658 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:15.658 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:15.658 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:15.658 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.658 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.658 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:15.658 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:15.658 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.658 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:14:15.658 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.658 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:15.658 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:15.658 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:15.658 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.658 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78732 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 78732 ']' 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 78732 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@955 -- # uname 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78732 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78732' 00:14:15.918 killing process with pid 78732 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 78732 00:14:15.918 Received shutdown signal, test time was about 10.920944 seconds 00:14:15.918 00:14:15.918 Latency(us) 00:14:15.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.918 =================================================================================================================== 00:14:15.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:15.918 [2024-09-28 08:51:53.781491] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:15.918 08:51:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 78732 00:14:16.488 [2024-09-28 08:51:54.177224] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:17.427 08:51:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:17.427 00:14:17.427 real 0m14.317s 00:14:17.427 user 0m17.708s 00:14:17.427 sys 0m1.895s 00:14:17.427 08:51:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:17.427 08:51:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.427 ************************************ 00:14:17.427 END TEST raid_rebuild_test_io 00:14:17.427 
************************************ 00:14:17.688 08:51:55 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:17.688 08:51:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:17.688 08:51:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.688 08:51:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:17.688 ************************************ 00:14:17.688 START TEST raid_rebuild_test_sb_io 00:14:17.688 ************************************ 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:17.688 08:51:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # 
raid_pid=79156 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79156 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 79156 ']' 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.688 08:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.688 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:17.688 Zero copy mechanism will not be used. 00:14:17.688 [2024-09-28 08:51:55.594167] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:14:17.688 [2024-09-28 08:51:55.594279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79156 ] 00:14:17.948 [2024-09-28 08:51:55.761524] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.210 [2024-09-28 08:51:55.952383] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.210 [2024-09-28 08:51:56.146346] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.210 [2024-09-28 08:51:56.146412] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.470 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.470 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:14:18.470 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:18.470 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:18.471 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.471 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.471 BaseBdev1_malloc 00:14:18.471 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.471 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:18.471 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.471 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.471 [2024-09-28 08:51:56.447638] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:18.471 [2024-09-28 08:51:56.447743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.471 [2024-09-28 08:51:56.447769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:18.471 [2024-09-28 08:51:56.447787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.471 [2024-09-28 08:51:56.449814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.471 [2024-09-28 08:51:56.449855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:18.471 BaseBdev1 00:14:18.471 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.471 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:18.471 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:18.471 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.471 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.731 BaseBdev2_malloc 00:14:18.731 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.731 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:18.731 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.731 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.731 [2024-09-28 08:51:56.529724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:18.731 [2024-09-28 08:51:56.529790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:18.731 [2024-09-28 08:51:56.529825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:18.731 [2024-09-28 08:51:56.529837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.731 [2024-09-28 08:51:56.531962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.731 [2024-09-28 08:51:56.532023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:18.731 BaseBdev2 00:14:18.731 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.731 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:18.731 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:18.731 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.731 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.731 BaseBdev3_malloc 00:14:18.731 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.731 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:18.731 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.731 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.731 [2024-09-28 08:51:56.584586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:18.731 [2024-09-28 08:51:56.584640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.731 [2024-09-28 08:51:56.584687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:18.731 
[2024-09-28 08:51:56.584700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.732 [2024-09-28 08:51:56.586740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.732 [2024-09-28 08:51:56.586779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:18.732 BaseBdev3 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.732 BaseBdev4_malloc 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.732 [2024-09-28 08:51:56.637476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:18.732 [2024-09-28 08:51:56.637534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.732 [2024-09-28 08:51:56.637569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:18.732 [2024-09-28 08:51:56.637582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.732 [2024-09-28 08:51:56.639676] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.732 [2024-09-28 08:51:56.639720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:18.732 BaseBdev4 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.732 spare_malloc 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.732 spare_delay 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.732 [2024-09-28 08:51:56.703120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:18.732 [2024-09-28 08:51:56.703180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.732 [2024-09-28 08:51:56.703223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:14:18.732 [2024-09-28 08:51:56.703236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.732 [2024-09-28 08:51:56.705259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.732 [2024-09-28 08:51:56.705318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:18.732 spare 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.732 [2024-09-28 08:51:56.715158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:18.732 [2024-09-28 08:51:56.716929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.732 [2024-09-28 08:51:56.717001] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:18.732 [2024-09-28 08:51:56.717057] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:18.732 [2024-09-28 08:51:56.717249] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:18.732 [2024-09-28 08:51:56.717274] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:18.732 [2024-09-28 08:51:56.717542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:18.732 [2024-09-28 08:51:56.717740] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:18.732 [2024-09-28 08:51:56.717761] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:18.732 [2024-09-28 08:51:56.717914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.732 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.991 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.991 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.991 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.991 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.991 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.991 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.991 "name": "raid_bdev1", 00:14:18.991 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:18.991 "strip_size_kb": 0, 00:14:18.991 "state": "online", 00:14:18.991 "raid_level": "raid1", 00:14:18.991 "superblock": true, 00:14:18.991 "num_base_bdevs": 4, 00:14:18.991 "num_base_bdevs_discovered": 4, 00:14:18.991 "num_base_bdevs_operational": 4, 00:14:18.991 "base_bdevs_list": [ 00:14:18.991 { 00:14:18.991 "name": "BaseBdev1", 00:14:18.991 "uuid": "a8c5c21d-88f8-50ad-ad98-4fefd98d253b", 00:14:18.991 "is_configured": true, 00:14:18.991 "data_offset": 2048, 00:14:18.991 "data_size": 63488 00:14:18.991 }, 00:14:18.991 { 00:14:18.991 "name": "BaseBdev2", 00:14:18.991 "uuid": "176df536-af0a-509e-8191-2956a5b99fec", 00:14:18.992 "is_configured": true, 00:14:18.992 "data_offset": 2048, 00:14:18.992 "data_size": 63488 00:14:18.992 }, 00:14:18.992 { 00:14:18.992 "name": "BaseBdev3", 00:14:18.992 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:18.992 "is_configured": true, 00:14:18.992 "data_offset": 2048, 00:14:18.992 "data_size": 63488 00:14:18.992 }, 00:14:18.992 { 00:14:18.992 "name": "BaseBdev4", 00:14:18.992 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:18.992 "is_configured": true, 00:14:18.992 "data_offset": 2048, 00:14:18.992 "data_size": 63488 00:14:18.992 } 00:14:18.992 ] 00:14:18.992 }' 00:14:18.992 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.992 08:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.253 [2024-09-28 08:51:57.142675] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.253 [2024-09-28 08:51:57.234201] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.253 08:51:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.253 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.514 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.514 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.514 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.514 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.514 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.514 "name": "raid_bdev1", 00:14:19.514 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:19.514 "strip_size_kb": 0, 00:14:19.514 "state": "online", 00:14:19.514 "raid_level": "raid1", 00:14:19.514 
"superblock": true, 00:14:19.514 "num_base_bdevs": 4, 00:14:19.514 "num_base_bdevs_discovered": 3, 00:14:19.514 "num_base_bdevs_operational": 3, 00:14:19.514 "base_bdevs_list": [ 00:14:19.514 { 00:14:19.514 "name": null, 00:14:19.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.514 "is_configured": false, 00:14:19.514 "data_offset": 0, 00:14:19.514 "data_size": 63488 00:14:19.514 }, 00:14:19.514 { 00:14:19.514 "name": "BaseBdev2", 00:14:19.514 "uuid": "176df536-af0a-509e-8191-2956a5b99fec", 00:14:19.514 "is_configured": true, 00:14:19.514 "data_offset": 2048, 00:14:19.514 "data_size": 63488 00:14:19.514 }, 00:14:19.514 { 00:14:19.514 "name": "BaseBdev3", 00:14:19.514 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:19.514 "is_configured": true, 00:14:19.514 "data_offset": 2048, 00:14:19.514 "data_size": 63488 00:14:19.514 }, 00:14:19.514 { 00:14:19.514 "name": "BaseBdev4", 00:14:19.514 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:19.514 "is_configured": true, 00:14:19.514 "data_offset": 2048, 00:14:19.514 "data_size": 63488 00:14:19.514 } 00:14:19.514 ] 00:14:19.514 }' 00:14:19.514 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.514 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.514 [2024-09-28 08:51:57.309080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:19.514 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:19.514 Zero copy mechanism will not be used. 00:14:19.514 Running I/O for 60 seconds... 
00:14:19.774 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:19.774 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.774 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.774 [2024-09-28 08:51:57.696634] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:19.774 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.774 08:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:19.774 [2024-09-28 08:51:57.765575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:19.774 [2024-09-28 08:51:57.767552] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:20.035 [2024-09-28 08:51:57.888559] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:20.035 [2024-09-28 08:51:57.889855] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:20.294 [2024-09-28 08:51:58.120911] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:20.294 [2024-09-28 08:51:58.121609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:20.554 167.00 IOPS, 501.00 MiB/s [2024-09-28 08:51:58.435148] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:20.815 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.815 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:20.815 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.815 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.815 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.815 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.815 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.815 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.815 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.815 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.815 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.815 "name": "raid_bdev1", 00:14:20.815 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:20.815 "strip_size_kb": 0, 00:14:20.815 "state": "online", 00:14:20.815 "raid_level": "raid1", 00:14:20.815 "superblock": true, 00:14:20.815 "num_base_bdevs": 4, 00:14:20.815 "num_base_bdevs_discovered": 4, 00:14:20.815 "num_base_bdevs_operational": 4, 00:14:20.815 "process": { 00:14:20.815 "type": "rebuild", 00:14:20.815 "target": "spare", 00:14:20.815 "progress": { 00:14:20.815 "blocks": 12288, 00:14:20.815 "percent": 19 00:14:20.815 } 00:14:20.815 }, 00:14:20.815 "base_bdevs_list": [ 00:14:20.815 { 00:14:20.815 "name": "spare", 00:14:20.815 "uuid": "55521ff6-bd4b-5789-814f-51247cc411c9", 00:14:20.815 "is_configured": true, 00:14:20.815 "data_offset": 2048, 00:14:20.815 "data_size": 63488 00:14:20.815 }, 00:14:20.815 { 00:14:20.815 "name": "BaseBdev2", 00:14:20.815 "uuid": "176df536-af0a-509e-8191-2956a5b99fec", 00:14:20.815 "is_configured": true, 00:14:20.815 
"data_offset": 2048, 00:14:20.815 "data_size": 63488 00:14:20.815 }, 00:14:20.815 { 00:14:20.815 "name": "BaseBdev3", 00:14:20.815 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:20.815 "is_configured": true, 00:14:20.815 "data_offset": 2048, 00:14:20.815 "data_size": 63488 00:14:20.815 }, 00:14:20.815 { 00:14:20.815 "name": "BaseBdev4", 00:14:20.815 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:20.815 "is_configured": true, 00:14:20.815 "data_offset": 2048, 00:14:20.815 "data_size": 63488 00:14:20.815 } 00:14:20.815 ] 00:14:20.815 }' 00:14:20.815 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.075 [2024-09-28 08:51:58.876737] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:21.075 [2024-09-28 08:51:58.912513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:21.075 [2024-09-28 08:51:58.925685] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:21.075 [2024-09-28 08:51:58.935432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.075 [2024-09-28 08:51:58.935494] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:14:21.075 [2024-09-28 08:51:58.935511] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:21.075 [2024-09-28 08:51:58.958517] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.075 08:51:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.075 
08:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.075 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.075 "name": "raid_bdev1", 00:14:21.075 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:21.075 "strip_size_kb": 0, 00:14:21.075 "state": "online", 00:14:21.075 "raid_level": "raid1", 00:14:21.075 "superblock": true, 00:14:21.075 "num_base_bdevs": 4, 00:14:21.075 "num_base_bdevs_discovered": 3, 00:14:21.075 "num_base_bdevs_operational": 3, 00:14:21.075 "base_bdevs_list": [ 00:14:21.075 { 00:14:21.075 "name": null, 00:14:21.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.075 "is_configured": false, 00:14:21.075 "data_offset": 0, 00:14:21.075 "data_size": 63488 00:14:21.075 }, 00:14:21.075 { 00:14:21.075 "name": "BaseBdev2", 00:14:21.075 "uuid": "176df536-af0a-509e-8191-2956a5b99fec", 00:14:21.075 "is_configured": true, 00:14:21.075 "data_offset": 2048, 00:14:21.075 "data_size": 63488 00:14:21.075 }, 00:14:21.075 { 00:14:21.075 "name": "BaseBdev3", 00:14:21.075 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:21.075 "is_configured": true, 00:14:21.075 "data_offset": 2048, 00:14:21.075 "data_size": 63488 00:14:21.075 }, 00:14:21.075 { 00:14:21.075 "name": "BaseBdev4", 00:14:21.075 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:21.075 "is_configured": true, 00:14:21.075 "data_offset": 2048, 00:14:21.075 "data_size": 63488 00:14:21.075 } 00:14:21.075 ] 00:14:21.075 }' 00:14:21.075 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.075 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.595 168.00 IOPS, 504.00 MiB/s 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.595 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:14:21.595 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.595 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.595 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.595 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.595 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.595 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.595 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.595 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.595 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.595 "name": "raid_bdev1", 00:14:21.595 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:21.595 "strip_size_kb": 0, 00:14:21.595 "state": "online", 00:14:21.595 "raid_level": "raid1", 00:14:21.595 "superblock": true, 00:14:21.595 "num_base_bdevs": 4, 00:14:21.595 "num_base_bdevs_discovered": 3, 00:14:21.595 "num_base_bdevs_operational": 3, 00:14:21.595 "base_bdevs_list": [ 00:14:21.595 { 00:14:21.595 "name": null, 00:14:21.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.595 "is_configured": false, 00:14:21.595 "data_offset": 0, 00:14:21.595 "data_size": 63488 00:14:21.595 }, 00:14:21.595 { 00:14:21.595 "name": "BaseBdev2", 00:14:21.595 "uuid": "176df536-af0a-509e-8191-2956a5b99fec", 00:14:21.595 "is_configured": true, 00:14:21.595 "data_offset": 2048, 00:14:21.595 "data_size": 63488 00:14:21.595 }, 00:14:21.595 { 00:14:21.595 "name": "BaseBdev3", 00:14:21.595 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:21.595 "is_configured": true, 00:14:21.595 "data_offset": 
2048, 00:14:21.595 "data_size": 63488 00:14:21.595 }, 00:14:21.595 { 00:14:21.595 "name": "BaseBdev4", 00:14:21.595 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:21.595 "is_configured": true, 00:14:21.595 "data_offset": 2048, 00:14:21.595 "data_size": 63488 00:14:21.595 } 00:14:21.595 ] 00:14:21.595 }' 00:14:21.595 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.595 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:21.595 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.854 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:21.854 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:21.854 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.854 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.854 [2024-09-28 08:51:59.620787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.855 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.855 08:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:21.855 [2024-09-28 08:51:59.678196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:21.855 [2024-09-28 08:51:59.680104] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:21.855 [2024-09-28 08:51:59.781785] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:21.855 [2024-09-28 08:51:59.782286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:14:22.113 [2024-09-28 08:52:00.005318] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:22.113 [2024-09-28 08:52:00.006163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:22.372 162.67 IOPS, 488.00 MiB/s [2024-09-28 08:52:00.351622] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:22.632 [2024-09-28 08:52:00.480748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.892 "name": "raid_bdev1", 00:14:22.892 "uuid": 
"f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:22.892 "strip_size_kb": 0, 00:14:22.892 "state": "online", 00:14:22.892 "raid_level": "raid1", 00:14:22.892 "superblock": true, 00:14:22.892 "num_base_bdevs": 4, 00:14:22.892 "num_base_bdevs_discovered": 4, 00:14:22.892 "num_base_bdevs_operational": 4, 00:14:22.892 "process": { 00:14:22.892 "type": "rebuild", 00:14:22.892 "target": "spare", 00:14:22.892 "progress": { 00:14:22.892 "blocks": 12288, 00:14:22.892 "percent": 19 00:14:22.892 } 00:14:22.892 }, 00:14:22.892 "base_bdevs_list": [ 00:14:22.892 { 00:14:22.892 "name": "spare", 00:14:22.892 "uuid": "55521ff6-bd4b-5789-814f-51247cc411c9", 00:14:22.892 "is_configured": true, 00:14:22.892 "data_offset": 2048, 00:14:22.892 "data_size": 63488 00:14:22.892 }, 00:14:22.892 { 00:14:22.892 "name": "BaseBdev2", 00:14:22.892 "uuid": "176df536-af0a-509e-8191-2956a5b99fec", 00:14:22.892 "is_configured": true, 00:14:22.892 "data_offset": 2048, 00:14:22.892 "data_size": 63488 00:14:22.892 }, 00:14:22.892 { 00:14:22.892 "name": "BaseBdev3", 00:14:22.892 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:22.892 "is_configured": true, 00:14:22.892 "data_offset": 2048, 00:14:22.892 "data_size": 63488 00:14:22.892 }, 00:14:22.892 { 00:14:22.892 "name": "BaseBdev4", 00:14:22.892 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:22.892 "is_configured": true, 00:14:22.892 "data_offset": 2048, 00:14:22.892 "data_size": 63488 00:14:22.892 } 00:14:22.892 ] 00:14:22.892 }' 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.892 [2024-09-28 08:52:00.745257] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:22.892 [2024-09-28 08:52:00.746610] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
14336 offset_begin: 12288 offset_end: 18432 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:22.892 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.892 08:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.892 [2024-09-28 08:52:00.805265] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:23.152 [2024-09-28 08:52:01.093293] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:23.152 [2024-09-28 08:52:01.093348] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:23.152 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.152 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:23.152 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:23.152 08:52:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.152 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.152 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.152 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.152 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.152 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.152 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.152 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.152 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.152 [2024-09-28 08:52:01.106943] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:23.152 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.411 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.411 "name": "raid_bdev1", 00:14:23.411 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:23.411 "strip_size_kb": 0, 00:14:23.411 "state": "online", 00:14:23.411 "raid_level": "raid1", 00:14:23.411 "superblock": true, 00:14:23.411 "num_base_bdevs": 4, 00:14:23.411 "num_base_bdevs_discovered": 3, 00:14:23.411 "num_base_bdevs_operational": 3, 00:14:23.411 "process": { 00:14:23.411 "type": "rebuild", 00:14:23.411 "target": "spare", 00:14:23.411 "progress": { 00:14:23.411 "blocks": 16384, 00:14:23.411 "percent": 25 00:14:23.411 } 00:14:23.411 }, 00:14:23.411 "base_bdevs_list": [ 00:14:23.411 { 
00:14:23.411 "name": "spare", 00:14:23.411 "uuid": "55521ff6-bd4b-5789-814f-51247cc411c9", 00:14:23.411 "is_configured": true, 00:14:23.411 "data_offset": 2048, 00:14:23.411 "data_size": 63488 00:14:23.411 }, 00:14:23.411 { 00:14:23.411 "name": null, 00:14:23.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.411 "is_configured": false, 00:14:23.411 "data_offset": 0, 00:14:23.411 "data_size": 63488 00:14:23.411 }, 00:14:23.411 { 00:14:23.411 "name": "BaseBdev3", 00:14:23.411 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:23.411 "is_configured": true, 00:14:23.411 "data_offset": 2048, 00:14:23.411 "data_size": 63488 00:14:23.411 }, 00:14:23.411 { 00:14:23.411 "name": "BaseBdev4", 00:14:23.411 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:23.411 "is_configured": true, 00:14:23.411 "data_offset": 2048, 00:14:23.411 "data_size": 63488 00:14:23.411 } 00:14:23.411 ] 00:14:23.411 }' 00:14:23.411 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.411 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.411 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.411 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.411 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=501 00:14:23.411 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.412 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.412 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.412 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.412 08:52:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.412 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.412 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.412 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.412 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.412 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.412 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.412 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.412 "name": "raid_bdev1", 00:14:23.412 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:23.412 "strip_size_kb": 0, 00:14:23.412 "state": "online", 00:14:23.412 "raid_level": "raid1", 00:14:23.412 "superblock": true, 00:14:23.412 "num_base_bdevs": 4, 00:14:23.412 "num_base_bdevs_discovered": 3, 00:14:23.412 "num_base_bdevs_operational": 3, 00:14:23.412 "process": { 00:14:23.412 "type": "rebuild", 00:14:23.412 "target": "spare", 00:14:23.412 "progress": { 00:14:23.412 "blocks": 16384, 00:14:23.412 "percent": 25 00:14:23.412 } 00:14:23.412 }, 00:14:23.412 "base_bdevs_list": [ 00:14:23.412 { 00:14:23.412 "name": "spare", 00:14:23.412 "uuid": "55521ff6-bd4b-5789-814f-51247cc411c9", 00:14:23.412 "is_configured": true, 00:14:23.412 "data_offset": 2048, 00:14:23.412 "data_size": 63488 00:14:23.412 }, 00:14:23.412 { 00:14:23.412 "name": null, 00:14:23.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.412 "is_configured": false, 00:14:23.412 "data_offset": 0, 00:14:23.412 "data_size": 63488 00:14:23.412 }, 00:14:23.412 { 00:14:23.412 "name": "BaseBdev3", 00:14:23.412 "uuid": 
"a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:23.412 "is_configured": true, 00:14:23.412 "data_offset": 2048, 00:14:23.412 "data_size": 63488 00:14:23.412 }, 00:14:23.412 { 00:14:23.412 "name": "BaseBdev4", 00:14:23.412 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:23.412 "is_configured": true, 00:14:23.412 "data_offset": 2048, 00:14:23.412 "data_size": 63488 00:14:23.412 } 00:14:23.412 ] 00:14:23.412 }' 00:14:23.412 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.412 146.75 IOPS, 440.25 MiB/s 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.412 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.412 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.412 08:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.671 [2024-09-28 08:52:01.437982] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:23.931 [2024-09-28 08:52:01.758476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:23.931 [2024-09-28 08:52:01.859864] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:24.191 [2024-09-28 08:52:02.082818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:24.451 [2024-09-28 08:52:02.198690] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:24.451 131.00 IOPS, 393.00 MiB/s 08:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.451 08:52:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.451 08:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.451 08:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.451 08:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.451 08:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.451 08:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.451 08:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.451 08:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.451 08:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.451 08:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.710 08:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.710 "name": "raid_bdev1", 00:14:24.710 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:24.710 "strip_size_kb": 0, 00:14:24.710 "state": "online", 00:14:24.710 "raid_level": "raid1", 00:14:24.710 "superblock": true, 00:14:24.710 "num_base_bdevs": 4, 00:14:24.710 "num_base_bdevs_discovered": 3, 00:14:24.710 "num_base_bdevs_operational": 3, 00:14:24.710 "process": { 00:14:24.710 "type": "rebuild", 00:14:24.710 "target": "spare", 00:14:24.710 "progress": { 00:14:24.710 "blocks": 36864, 00:14:24.710 "percent": 58 00:14:24.710 } 00:14:24.710 }, 00:14:24.710 "base_bdevs_list": [ 00:14:24.710 { 00:14:24.710 "name": "spare", 00:14:24.710 "uuid": "55521ff6-bd4b-5789-814f-51247cc411c9", 00:14:24.710 "is_configured": true, 00:14:24.710 "data_offset": 2048, 
00:14:24.710 "data_size": 63488 00:14:24.710 }, 00:14:24.710 { 00:14:24.710 "name": null, 00:14:24.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.710 "is_configured": false, 00:14:24.710 "data_offset": 0, 00:14:24.710 "data_size": 63488 00:14:24.710 }, 00:14:24.710 { 00:14:24.710 "name": "BaseBdev3", 00:14:24.710 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:24.710 "is_configured": true, 00:14:24.710 "data_offset": 2048, 00:14:24.710 "data_size": 63488 00:14:24.710 }, 00:14:24.710 { 00:14:24.710 "name": "BaseBdev4", 00:14:24.710 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:24.710 "is_configured": true, 00:14:24.710 "data_offset": 2048, 00:14:24.710 "data_size": 63488 00:14:24.710 } 00:14:24.710 ] 00:14:24.710 }' 00:14:24.710 08:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.710 08:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.710 08:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.710 [2024-09-28 08:52:02.522053] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:24.710 08:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.710 08:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:24.970 [2024-09-28 08:52:02.851064] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:25.539 115.83 IOPS, 347.50 MiB/s [2024-09-28 08:52:03.394118] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:25.799 08:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.799 08:52:03 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.799 08:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.799 08:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.799 08:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.799 08:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.799 08:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.799 08:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.799 08:52:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.799 08:52:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.799 08:52:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.799 08:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.799 "name": "raid_bdev1", 00:14:25.799 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:25.799 "strip_size_kb": 0, 00:14:25.799 "state": "online", 00:14:25.799 "raid_level": "raid1", 00:14:25.799 "superblock": true, 00:14:25.799 "num_base_bdevs": 4, 00:14:25.799 "num_base_bdevs_discovered": 3, 00:14:25.799 "num_base_bdevs_operational": 3, 00:14:25.799 "process": { 00:14:25.799 "type": "rebuild", 00:14:25.799 "target": "spare", 00:14:25.799 "progress": { 00:14:25.799 "blocks": 55296, 00:14:25.799 "percent": 87 00:14:25.799 } 00:14:25.799 }, 00:14:25.799 "base_bdevs_list": [ 00:14:25.799 { 00:14:25.799 "name": "spare", 00:14:25.799 "uuid": "55521ff6-bd4b-5789-814f-51247cc411c9", 00:14:25.799 "is_configured": true, 00:14:25.799 "data_offset": 2048, 00:14:25.799 "data_size": 63488 
00:14:25.799 }, 00:14:25.799 { 00:14:25.799 "name": null, 00:14:25.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.799 "is_configured": false, 00:14:25.799 "data_offset": 0, 00:14:25.799 "data_size": 63488 00:14:25.799 }, 00:14:25.799 { 00:14:25.799 "name": "BaseBdev3", 00:14:25.799 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:25.799 "is_configured": true, 00:14:25.799 "data_offset": 2048, 00:14:25.799 "data_size": 63488 00:14:25.799 }, 00:14:25.799 { 00:14:25.799 "name": "BaseBdev4", 00:14:25.799 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:25.799 "is_configured": true, 00:14:25.799 "data_offset": 2048, 00:14:25.799 "data_size": 63488 00:14:25.799 } 00:14:25.799 ] 00:14:25.799 }' 00:14:25.799 08:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.799 08:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.800 08:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.800 08:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.800 08:52:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:26.059 [2024-09-28 08:52:03.940351] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:26.059 [2024-09-28 08:52:04.040175] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:26.059 [2024-09-28 08:52:04.042021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.888 105.29 IOPS, 315.86 MiB/s 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.888 "name": "raid_bdev1", 00:14:26.888 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:26.888 "strip_size_kb": 0, 00:14:26.888 "state": "online", 00:14:26.888 "raid_level": "raid1", 00:14:26.888 "superblock": true, 00:14:26.888 "num_base_bdevs": 4, 00:14:26.888 "num_base_bdevs_discovered": 3, 00:14:26.888 "num_base_bdevs_operational": 3, 00:14:26.888 "base_bdevs_list": [ 00:14:26.888 { 00:14:26.888 "name": "spare", 00:14:26.888 "uuid": "55521ff6-bd4b-5789-814f-51247cc411c9", 00:14:26.888 "is_configured": true, 00:14:26.888 "data_offset": 2048, 00:14:26.888 "data_size": 63488 00:14:26.888 }, 00:14:26.888 { 00:14:26.888 "name": null, 00:14:26.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.888 "is_configured": false, 00:14:26.888 "data_offset": 0, 00:14:26.888 "data_size": 63488 00:14:26.888 }, 00:14:26.888 { 00:14:26.888 "name": "BaseBdev3", 00:14:26.888 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 
00:14:26.888 "is_configured": true, 00:14:26.888 "data_offset": 2048, 00:14:26.888 "data_size": 63488 00:14:26.888 }, 00:14:26.888 { 00:14:26.888 "name": "BaseBdev4", 00:14:26.888 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:26.888 "is_configured": true, 00:14:26.888 "data_offset": 2048, 00:14:26.888 "data_size": 63488 00:14:26.888 } 00:14:26.888 ] 00:14:26.888 }' 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.888 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.888 08:52:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.149 "name": "raid_bdev1", 00:14:27.149 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:27.149 "strip_size_kb": 0, 00:14:27.149 "state": "online", 00:14:27.149 "raid_level": "raid1", 00:14:27.149 "superblock": true, 00:14:27.149 "num_base_bdevs": 4, 00:14:27.149 "num_base_bdevs_discovered": 3, 00:14:27.149 "num_base_bdevs_operational": 3, 00:14:27.149 "base_bdevs_list": [ 00:14:27.149 { 00:14:27.149 "name": "spare", 00:14:27.149 "uuid": "55521ff6-bd4b-5789-814f-51247cc411c9", 00:14:27.149 "is_configured": true, 00:14:27.149 "data_offset": 2048, 00:14:27.149 "data_size": 63488 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "name": null, 00:14:27.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.149 "is_configured": false, 00:14:27.149 "data_offset": 0, 00:14:27.149 "data_size": 63488 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "name": "BaseBdev3", 00:14:27.149 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:27.149 "is_configured": true, 00:14:27.149 "data_offset": 2048, 00:14:27.149 "data_size": 63488 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "name": "BaseBdev4", 00:14:27.149 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:27.149 "is_configured": true, 00:14:27.149 "data_offset": 2048, 00:14:27.149 "data_size": 63488 00:14:27.149 } 00:14:27.149 ] 00:14:27.149 }' 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:27.149 08:52:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.149 08:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.149 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.149 "name": "raid_bdev1", 00:14:27.149 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:27.149 "strip_size_kb": 0, 00:14:27.149 "state": "online", 00:14:27.149 "raid_level": "raid1", 00:14:27.149 
"superblock": true, 00:14:27.149 "num_base_bdevs": 4, 00:14:27.149 "num_base_bdevs_discovered": 3, 00:14:27.149 "num_base_bdevs_operational": 3, 00:14:27.149 "base_bdevs_list": [ 00:14:27.149 { 00:14:27.149 "name": "spare", 00:14:27.149 "uuid": "55521ff6-bd4b-5789-814f-51247cc411c9", 00:14:27.149 "is_configured": true, 00:14:27.149 "data_offset": 2048, 00:14:27.149 "data_size": 63488 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "name": null, 00:14:27.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.149 "is_configured": false, 00:14:27.149 "data_offset": 0, 00:14:27.149 "data_size": 63488 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "name": "BaseBdev3", 00:14:27.149 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:27.149 "is_configured": true, 00:14:27.149 "data_offset": 2048, 00:14:27.149 "data_size": 63488 00:14:27.149 }, 00:14:27.149 { 00:14:27.149 "name": "BaseBdev4", 00:14:27.149 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:27.149 "is_configured": true, 00:14:27.149 "data_offset": 2048, 00:14:27.149 "data_size": 63488 00:14:27.149 } 00:14:27.149 ] 00:14:27.149 }' 00:14:27.149 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.149 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.668 95.38 IOPS, 286.12 MiB/s 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:27.668 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.668 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.668 [2024-09-28 08:52:05.434804] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.668 [2024-09-28 08:52:05.434844] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.668 00:14:27.668 Latency(us) 00:14:27.668 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.668 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:27.668 raid_bdev1 : 8.25 93.37 280.10 0.00 0.00 15486.68 304.07 114473.36 00:14:27.668 =================================================================================================================== 00:14:27.668 Total : 93.37 280.10 0.00 0.00 15486.68 304.07 114473.36 00:14:27.668 [2024-09-28 08:52:05.561748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.668 [2024-09-28 08:52:05.561814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.668 [2024-09-28 08:52:05.561912] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.668 [2024-09-28 08:52:05.561923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:27.668 { 00:14:27.668 "results": [ 00:14:27.668 { 00:14:27.668 "job": "raid_bdev1", 00:14:27.668 "core_mask": "0x1", 00:14:27.668 "workload": "randrw", 00:14:27.668 "percentage": 50, 00:14:27.668 "status": "finished", 00:14:27.668 "queue_depth": 2, 00:14:27.668 "io_size": 3145728, 00:14:27.668 "runtime": 8.24693, 00:14:27.668 "iops": 93.36807757553417, 00:14:27.668 "mibps": 280.1042327266025, 00:14:27.668 "io_failed": 0, 00:14:27.668 "io_timeout": 0, 00:14:27.668 "avg_latency_us": 15486.676808257245, 00:14:27.668 "min_latency_us": 304.0698689956332, 00:14:27.668 "max_latency_us": 114473.36244541485 00:14:27.668 } 00:14:27.668 ], 00:14:27.668 "core_count": 1 00:14:27.668 } 00:14:27.668 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.668 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.668 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:27.668 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:27.668 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.668 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.668 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:27.668 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:27.669 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:27.669 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:27.669 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:27.669 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:27.669 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:27.669 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:27.669 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:27.669 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:27.669 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:27.669 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:27.669 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:27.927 /dev/nbd0 00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:27.927 1+0 records in 00:14:27.927 1+0 records out 00:14:27.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387099 s, 10.6 MB/s 00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:27.927 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:27.928 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i 
< 1 )) 00:14:27.928 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:27.928 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:27.928 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:27.928 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:27.928 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:27.928 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:27.928 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:27.928 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:27.928 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:27.928 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:27.928 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:27.928 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:27.928 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:27.928 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:27.928 08:52:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:28.187 /dev/nbd1 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.187 1+0 records in 00:14:28.187 1+0 records out 00:14:28.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412602 s, 9.9 MB/s 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.187 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.187 08:52:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:28.445 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:28.445 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.445 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:28.445 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:28.445 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:28.445 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.445 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:28.704 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:28.704 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:28.704 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:28.704 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:28.704 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:28.704 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:28.704 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:28.704 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:28.705 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:28.705 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 
00:14:28.705 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:28.705 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.705 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:28.705 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:28.705 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:28.705 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:28.705 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:28.705 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:28.705 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.705 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:28.705 /dev/nbd1 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:28.964 08:52:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.964 1+0 records in 00:14:28.964 1+0 records out 00:14:28.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360889 s, 11.3 MB/s 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.964 08:52:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:29.224 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:29.224 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:29.224 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:29.224 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.224 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.224 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:29.224 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:29.224 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.224 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:29.224 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.224 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:29.224 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:29.224 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:29.224 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.224 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.483 [2024-09-28 08:52:07.267858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:29.483 [2024-09-28 08:52:07.267914] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.483 [2024-09-28 08:52:07.267938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:29.483 [2024-09-28 08:52:07.267947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.483 [2024-09-28 08:52:07.270370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.483 [2024-09-28 08:52:07.270407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:29.483 [2024-09-28 08:52:07.270497] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:29.483 [2024-09-28 08:52:07.270546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:29.483 [2024-09-28 08:52:07.270706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:29.483 [2024-09-28 08:52:07.270831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:29.483 spare 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.483 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.484 [2024-09-28 08:52:07.370741] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:29.484 [2024-09-28 08:52:07.370766] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:29.484 [2024-09-28 08:52:07.371051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:29.484 [2024-09-28 08:52:07.371237] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007b00 00:14:29.484 [2024-09-28 08:52:07.371273] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:29.484 [2024-09-28 08:52:07.371429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.484 
08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.484 "name": "raid_bdev1", 00:14:29.484 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:29.484 "strip_size_kb": 0, 00:14:29.484 "state": "online", 00:14:29.484 "raid_level": "raid1", 00:14:29.484 "superblock": true, 00:14:29.484 "num_base_bdevs": 4, 00:14:29.484 "num_base_bdevs_discovered": 3, 00:14:29.484 "num_base_bdevs_operational": 3, 00:14:29.484 "base_bdevs_list": [ 00:14:29.484 { 00:14:29.484 "name": "spare", 00:14:29.484 "uuid": "55521ff6-bd4b-5789-814f-51247cc411c9", 00:14:29.484 "is_configured": true, 00:14:29.484 "data_offset": 2048, 00:14:29.484 "data_size": 63488 00:14:29.484 }, 00:14:29.484 { 00:14:29.484 "name": null, 00:14:29.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.484 "is_configured": false, 00:14:29.484 "data_offset": 2048, 00:14:29.484 "data_size": 63488 00:14:29.484 }, 00:14:29.484 { 00:14:29.484 "name": "BaseBdev3", 00:14:29.484 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:29.484 "is_configured": true, 00:14:29.484 "data_offset": 2048, 00:14:29.484 "data_size": 63488 00:14:29.484 }, 00:14:29.484 { 00:14:29.484 "name": "BaseBdev4", 00:14:29.484 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:29.484 "is_configured": true, 00:14:29.484 "data_offset": 2048, 00:14:29.484 "data_size": 63488 00:14:29.484 } 00:14:29.484 ] 00:14:29.484 }' 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.484 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.052 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.052 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.052 08:52:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.052 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.052 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.052 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.052 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.052 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.052 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.052 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.052 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.052 "name": "raid_bdev1", 00:14:30.052 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:30.052 "strip_size_kb": 0, 00:14:30.052 "state": "online", 00:14:30.052 "raid_level": "raid1", 00:14:30.052 "superblock": true, 00:14:30.052 "num_base_bdevs": 4, 00:14:30.052 "num_base_bdevs_discovered": 3, 00:14:30.052 "num_base_bdevs_operational": 3, 00:14:30.052 "base_bdevs_list": [ 00:14:30.052 { 00:14:30.052 "name": "spare", 00:14:30.052 "uuid": "55521ff6-bd4b-5789-814f-51247cc411c9", 00:14:30.052 "is_configured": true, 00:14:30.052 "data_offset": 2048, 00:14:30.052 "data_size": 63488 00:14:30.052 }, 00:14:30.052 { 00:14:30.052 "name": null, 00:14:30.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.052 "is_configured": false, 00:14:30.052 "data_offset": 2048, 00:14:30.052 "data_size": 63488 00:14:30.052 }, 00:14:30.052 { 00:14:30.052 "name": "BaseBdev3", 00:14:30.052 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:30.052 "is_configured": true, 00:14:30.052 "data_offset": 2048, 00:14:30.052 
"data_size": 63488 00:14:30.052 }, 00:14:30.052 { 00:14:30.052 "name": "BaseBdev4", 00:14:30.052 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:30.052 "is_configured": true, 00:14:30.052 "data_offset": 2048, 00:14:30.052 "data_size": 63488 00:14:30.052 } 00:14:30.052 ] 00:14:30.052 }' 00:14:30.052 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.052 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.052 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.052 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.052 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.052 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.053 [2024-09-28 08:52:07.926952] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.053 08:52:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.053 "name": "raid_bdev1", 00:14:30.053 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:30.053 "strip_size_kb": 0, 00:14:30.053 "state": "online", 00:14:30.053 "raid_level": "raid1", 00:14:30.053 
"superblock": true, 00:14:30.053 "num_base_bdevs": 4, 00:14:30.053 "num_base_bdevs_discovered": 2, 00:14:30.053 "num_base_bdevs_operational": 2, 00:14:30.053 "base_bdevs_list": [ 00:14:30.053 { 00:14:30.053 "name": null, 00:14:30.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.053 "is_configured": false, 00:14:30.053 "data_offset": 0, 00:14:30.053 "data_size": 63488 00:14:30.053 }, 00:14:30.053 { 00:14:30.053 "name": null, 00:14:30.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.053 "is_configured": false, 00:14:30.053 "data_offset": 2048, 00:14:30.053 "data_size": 63488 00:14:30.053 }, 00:14:30.053 { 00:14:30.053 "name": "BaseBdev3", 00:14:30.053 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:30.053 "is_configured": true, 00:14:30.053 "data_offset": 2048, 00:14:30.053 "data_size": 63488 00:14:30.053 }, 00:14:30.053 { 00:14:30.053 "name": "BaseBdev4", 00:14:30.053 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:30.053 "is_configured": true, 00:14:30.053 "data_offset": 2048, 00:14:30.053 "data_size": 63488 00:14:30.053 } 00:14:30.053 ] 00:14:30.053 }' 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.053 08:52:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.620 08:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:30.620 08:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.620 08:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.620 [2024-09-28 08:52:08.346305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:30.620 [2024-09-28 08:52:08.346488] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:30.620 [2024-09-28 08:52:08.346507] 
bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:30.620 [2024-09-28 08:52:08.346543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:30.621 [2024-09-28 08:52:08.360573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:30.621 08:52:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.621 08:52:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:30.621 [2024-09-28 08:52:08.362490] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.557 "name": "raid_bdev1", 00:14:31.557 "uuid": 
"f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:31.557 "strip_size_kb": 0, 00:14:31.557 "state": "online", 00:14:31.557 "raid_level": "raid1", 00:14:31.557 "superblock": true, 00:14:31.557 "num_base_bdevs": 4, 00:14:31.557 "num_base_bdevs_discovered": 3, 00:14:31.557 "num_base_bdevs_operational": 3, 00:14:31.557 "process": { 00:14:31.557 "type": "rebuild", 00:14:31.557 "target": "spare", 00:14:31.557 "progress": { 00:14:31.557 "blocks": 20480, 00:14:31.557 "percent": 32 00:14:31.557 } 00:14:31.557 }, 00:14:31.557 "base_bdevs_list": [ 00:14:31.557 { 00:14:31.557 "name": "spare", 00:14:31.557 "uuid": "55521ff6-bd4b-5789-814f-51247cc411c9", 00:14:31.557 "is_configured": true, 00:14:31.557 "data_offset": 2048, 00:14:31.557 "data_size": 63488 00:14:31.557 }, 00:14:31.557 { 00:14:31.557 "name": null, 00:14:31.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.557 "is_configured": false, 00:14:31.557 "data_offset": 2048, 00:14:31.557 "data_size": 63488 00:14:31.557 }, 00:14:31.557 { 00:14:31.557 "name": "BaseBdev3", 00:14:31.557 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:31.557 "is_configured": true, 00:14:31.557 "data_offset": 2048, 00:14:31.557 "data_size": 63488 00:14:31.557 }, 00:14:31.557 { 00:14:31.557 "name": "BaseBdev4", 00:14:31.557 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:31.557 "is_configured": true, 00:14:31.557 "data_offset": 2048, 00:14:31.557 "data_size": 63488 00:14:31.557 } 00:14:31.557 ] 00:14:31.557 }' 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.557 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.557 [2024-09-28 08:52:09.494412] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.816 [2024-09-28 08:52:09.571015] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:31.816 [2024-09-28 08:52:09.571073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.816 [2024-09-28 08:52:09.571092] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.816 [2024-09-28 08:52:09.571100] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.816 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.816 "name": "raid_bdev1", 00:14:31.816 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:31.816 "strip_size_kb": 0, 00:14:31.816 "state": "online", 00:14:31.816 "raid_level": "raid1", 00:14:31.816 "superblock": true, 00:14:31.817 "num_base_bdevs": 4, 00:14:31.817 "num_base_bdevs_discovered": 2, 00:14:31.817 "num_base_bdevs_operational": 2, 00:14:31.817 "base_bdevs_list": [ 00:14:31.817 { 00:14:31.817 "name": null, 00:14:31.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.817 "is_configured": false, 00:14:31.817 "data_offset": 0, 00:14:31.817 "data_size": 63488 00:14:31.817 }, 00:14:31.817 { 00:14:31.817 "name": null, 00:14:31.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.817 "is_configured": false, 00:14:31.817 "data_offset": 2048, 00:14:31.817 "data_size": 63488 00:14:31.817 }, 00:14:31.817 { 00:14:31.817 "name": "BaseBdev3", 00:14:31.817 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:31.817 "is_configured": true, 00:14:31.817 "data_offset": 2048, 00:14:31.817 "data_size": 63488 00:14:31.817 }, 00:14:31.817 { 00:14:31.817 "name": "BaseBdev4", 00:14:31.817 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 
00:14:31.817 "is_configured": true, 00:14:31.817 "data_offset": 2048, 00:14:31.817 "data_size": 63488 00:14:31.817 } 00:14:31.817 ] 00:14:31.817 }' 00:14:31.817 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.817 08:52:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.076 08:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:32.076 08:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.076 08:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.076 [2024-09-28 08:52:10.015334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:32.076 [2024-09-28 08:52:10.015397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.076 [2024-09-28 08:52:10.015426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:32.076 [2024-09-28 08:52:10.015437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.076 [2024-09-28 08:52:10.016011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.076 [2024-09-28 08:52:10.016038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:32.076 [2024-09-28 08:52:10.016134] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:32.076 [2024-09-28 08:52:10.016153] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:32.076 [2024-09-28 08:52:10.016166] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:32.076 [2024-09-28 08:52:10.016195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.076 [2024-09-28 08:52:10.030225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:32.076 spare 00:14:32.076 08:52:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.076 08:52:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:32.076 [2024-09-28 08:52:10.032378] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:33.453 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.453 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.453 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.453 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.453 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.453 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.453 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.453 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.453 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.453 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.454 "name": "raid_bdev1", 00:14:33.454 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:33.454 "strip_size_kb": 0, 00:14:33.454 
"state": "online", 00:14:33.454 "raid_level": "raid1", 00:14:33.454 "superblock": true, 00:14:33.454 "num_base_bdevs": 4, 00:14:33.454 "num_base_bdevs_discovered": 3, 00:14:33.454 "num_base_bdevs_operational": 3, 00:14:33.454 "process": { 00:14:33.454 "type": "rebuild", 00:14:33.454 "target": "spare", 00:14:33.454 "progress": { 00:14:33.454 "blocks": 20480, 00:14:33.454 "percent": 32 00:14:33.454 } 00:14:33.454 }, 00:14:33.454 "base_bdevs_list": [ 00:14:33.454 { 00:14:33.454 "name": "spare", 00:14:33.454 "uuid": "55521ff6-bd4b-5789-814f-51247cc411c9", 00:14:33.454 "is_configured": true, 00:14:33.454 "data_offset": 2048, 00:14:33.454 "data_size": 63488 00:14:33.454 }, 00:14:33.454 { 00:14:33.454 "name": null, 00:14:33.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.454 "is_configured": false, 00:14:33.454 "data_offset": 2048, 00:14:33.454 "data_size": 63488 00:14:33.454 }, 00:14:33.454 { 00:14:33.454 "name": "BaseBdev3", 00:14:33.454 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:33.454 "is_configured": true, 00:14:33.454 "data_offset": 2048, 00:14:33.454 "data_size": 63488 00:14:33.454 }, 00:14:33.454 { 00:14:33.454 "name": "BaseBdev4", 00:14:33.454 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:33.454 "is_configured": true, 00:14:33.454 "data_offset": 2048, 00:14:33.454 "data_size": 63488 00:14:33.454 } 00:14:33.454 ] 00:14:33.454 }' 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:33.454 08:52:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.454 [2024-09-28 08:52:11.197136] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.454 [2024-09-28 08:52:11.240970] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:33.454 [2024-09-28 08:52:11.241060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.454 [2024-09-28 08:52:11.241079] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.454 [2024-09-28 08:52:11.241092] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.454 08:52:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.454 "name": "raid_bdev1", 00:14:33.454 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:33.454 "strip_size_kb": 0, 00:14:33.454 "state": "online", 00:14:33.454 "raid_level": "raid1", 00:14:33.454 "superblock": true, 00:14:33.454 "num_base_bdevs": 4, 00:14:33.454 "num_base_bdevs_discovered": 2, 00:14:33.454 "num_base_bdevs_operational": 2, 00:14:33.454 "base_bdevs_list": [ 00:14:33.454 { 00:14:33.454 "name": null, 00:14:33.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.454 "is_configured": false, 00:14:33.454 "data_offset": 0, 00:14:33.454 "data_size": 63488 00:14:33.454 }, 00:14:33.454 { 00:14:33.454 "name": null, 00:14:33.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.454 "is_configured": false, 00:14:33.454 "data_offset": 2048, 00:14:33.454 "data_size": 63488 00:14:33.454 }, 00:14:33.454 { 00:14:33.454 "name": "BaseBdev3", 00:14:33.454 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:33.454 "is_configured": true, 00:14:33.454 "data_offset": 2048, 00:14:33.454 "data_size": 63488 00:14:33.454 }, 00:14:33.454 { 00:14:33.454 "name": "BaseBdev4", 00:14:33.454 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:33.454 "is_configured": true, 00:14:33.454 "data_offset": 2048, 00:14:33.454 
"data_size": 63488 00:14:33.454 } 00:14:33.454 ] 00:14:33.454 }' 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.454 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.714 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.714 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.714 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.714 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.714 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.714 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.714 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.714 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.714 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.714 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.714 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.714 "name": "raid_bdev1", 00:14:33.714 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:33.714 "strip_size_kb": 0, 00:14:33.714 "state": "online", 00:14:33.714 "raid_level": "raid1", 00:14:33.714 "superblock": true, 00:14:33.714 "num_base_bdevs": 4, 00:14:33.714 "num_base_bdevs_discovered": 2, 00:14:33.714 "num_base_bdevs_operational": 2, 00:14:33.714 "base_bdevs_list": [ 00:14:33.714 { 00:14:33.714 "name": null, 00:14:33.714 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:33.714 "is_configured": false, 00:14:33.714 "data_offset": 0, 00:14:33.714 "data_size": 63488 00:14:33.714 }, 00:14:33.714 { 00:14:33.714 "name": null, 00:14:33.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.714 "is_configured": false, 00:14:33.714 "data_offset": 2048, 00:14:33.714 "data_size": 63488 00:14:33.714 }, 00:14:33.714 { 00:14:33.714 "name": "BaseBdev3", 00:14:33.714 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:33.714 "is_configured": true, 00:14:33.714 "data_offset": 2048, 00:14:33.714 "data_size": 63488 00:14:33.714 }, 00:14:33.714 { 00:14:33.714 "name": "BaseBdev4", 00:14:33.714 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:33.714 "is_configured": true, 00:14:33.714 "data_offset": 2048, 00:14:33.714 "data_size": 63488 00:14:33.714 } 00:14:33.714 ] 00:14:33.714 }' 00:14:33.714 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.975 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.975 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.975 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.975 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:33.975 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.975 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.975 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.975 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:33.975 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.975 08:52:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.975 [2024-09-28 08:52:11.800735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:33.975 [2024-09-28 08:52:11.800792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.975 [2024-09-28 08:52:11.800812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:33.975 [2024-09-28 08:52:11.800823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.975 [2024-09-28 08:52:11.801329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.975 [2024-09-28 08:52:11.801360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:33.975 [2024-09-28 08:52:11.801446] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:33.975 [2024-09-28 08:52:11.801469] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:33.975 [2024-09-28 08:52:11.801479] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:33.975 [2024-09-28 08:52:11.801491] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:33.975 BaseBdev1 00:14:33.975 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.975 08:52:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.915 "name": "raid_bdev1", 00:14:34.915 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:34.915 "strip_size_kb": 0, 00:14:34.915 "state": "online", 00:14:34.915 "raid_level": "raid1", 00:14:34.915 "superblock": true, 00:14:34.915 "num_base_bdevs": 4, 00:14:34.915 "num_base_bdevs_discovered": 2, 00:14:34.915 "num_base_bdevs_operational": 2, 00:14:34.915 "base_bdevs_list": [ 00:14:34.915 { 00:14:34.915 "name": null, 00:14:34.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.915 "is_configured": false, 00:14:34.915 
"data_offset": 0, 00:14:34.915 "data_size": 63488 00:14:34.915 }, 00:14:34.915 { 00:14:34.915 "name": null, 00:14:34.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.915 "is_configured": false, 00:14:34.915 "data_offset": 2048, 00:14:34.915 "data_size": 63488 00:14:34.915 }, 00:14:34.915 { 00:14:34.915 "name": "BaseBdev3", 00:14:34.915 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:34.915 "is_configured": true, 00:14:34.915 "data_offset": 2048, 00:14:34.915 "data_size": 63488 00:14:34.915 }, 00:14:34.915 { 00:14:34.915 "name": "BaseBdev4", 00:14:34.915 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:34.915 "is_configured": true, 00:14:34.915 "data_offset": 2048, 00:14:34.915 "data_size": 63488 00:14:34.915 } 00:14:34.915 ] 00:14:34.915 }' 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.915 08:52:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.485 "name": "raid_bdev1", 00:14:35.485 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:35.485 "strip_size_kb": 0, 00:14:35.485 "state": "online", 00:14:35.485 "raid_level": "raid1", 00:14:35.485 "superblock": true, 00:14:35.485 "num_base_bdevs": 4, 00:14:35.485 "num_base_bdevs_discovered": 2, 00:14:35.485 "num_base_bdevs_operational": 2, 00:14:35.485 "base_bdevs_list": [ 00:14:35.485 { 00:14:35.485 "name": null, 00:14:35.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.485 "is_configured": false, 00:14:35.485 "data_offset": 0, 00:14:35.485 "data_size": 63488 00:14:35.485 }, 00:14:35.485 { 00:14:35.485 "name": null, 00:14:35.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.485 "is_configured": false, 00:14:35.485 "data_offset": 2048, 00:14:35.485 "data_size": 63488 00:14:35.485 }, 00:14:35.485 { 00:14:35.485 "name": "BaseBdev3", 00:14:35.485 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:35.485 "is_configured": true, 00:14:35.485 "data_offset": 2048, 00:14:35.485 "data_size": 63488 00:14:35.485 }, 00:14:35.485 { 00:14:35.485 "name": "BaseBdev4", 00:14:35.485 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:35.485 "is_configured": true, 00:14:35.485 "data_offset": 2048, 00:14:35.485 "data_size": 63488 00:14:35.485 } 00:14:35.485 ] 00:14:35.485 }' 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:35.485 
08:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.485 [2024-09-28 08:52:13.438429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.485 [2024-09-28 08:52:13.438623] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:35.485 [2024-09-28 08:52:13.438636] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:35.485 request: 00:14:35.485 { 00:14:35.485 "base_bdev": "BaseBdev1", 00:14:35.485 "raid_bdev": "raid_bdev1", 00:14:35.485 "method": "bdev_raid_add_base_bdev", 00:14:35.485 "req_id": 1 00:14:35.485 } 00:14:35.485 Got JSON-RPC error response 00:14:35.485 response: 00:14:35.485 { 00:14:35.485 "code": -22, 00:14:35.485 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:35.485 } 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:35.485 08:52:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:36.867 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:36.867 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.867 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.867 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.867 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.867 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:36.867 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.867 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.867 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.867 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.867 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.867 08:52:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.867 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.867 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.867 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.867 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.867 "name": "raid_bdev1", 00:14:36.867 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:36.867 "strip_size_kb": 0, 00:14:36.867 "state": "online", 00:14:36.867 "raid_level": "raid1", 00:14:36.867 "superblock": true, 00:14:36.867 "num_base_bdevs": 4, 00:14:36.867 "num_base_bdevs_discovered": 2, 00:14:36.867 "num_base_bdevs_operational": 2, 00:14:36.867 "base_bdevs_list": [ 00:14:36.867 { 00:14:36.867 "name": null, 00:14:36.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.867 "is_configured": false, 00:14:36.867 "data_offset": 0, 00:14:36.867 "data_size": 63488 00:14:36.867 }, 00:14:36.867 { 00:14:36.867 "name": null, 00:14:36.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.867 "is_configured": false, 00:14:36.867 "data_offset": 2048, 00:14:36.867 "data_size": 63488 00:14:36.867 }, 00:14:36.867 { 00:14:36.867 "name": "BaseBdev3", 00:14:36.867 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:36.867 "is_configured": true, 00:14:36.867 "data_offset": 2048, 00:14:36.867 "data_size": 63488 00:14:36.867 }, 00:14:36.867 { 00:14:36.867 "name": "BaseBdev4", 00:14:36.867 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:36.867 "is_configured": true, 00:14:36.867 "data_offset": 2048, 00:14:36.867 "data_size": 63488 00:14:36.867 } 00:14:36.867 ] 00:14:36.867 }' 00:14:36.867 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.867 08:52:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.128 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.128 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.128 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.128 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.128 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.128 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.128 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.128 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.128 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.128 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.128 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.128 "name": "raid_bdev1", 00:14:37.128 "uuid": "f31a19f8-5bcd-46a9-8445-6d2786c8cdc4", 00:14:37.128 "strip_size_kb": 0, 00:14:37.128 "state": "online", 00:14:37.128 "raid_level": "raid1", 00:14:37.128 "superblock": true, 00:14:37.128 "num_base_bdevs": 4, 00:14:37.128 "num_base_bdevs_discovered": 2, 00:14:37.128 "num_base_bdevs_operational": 2, 00:14:37.128 "base_bdevs_list": [ 00:14:37.128 { 00:14:37.128 "name": null, 00:14:37.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.128 "is_configured": false, 00:14:37.128 "data_offset": 0, 00:14:37.128 "data_size": 63488 00:14:37.128 }, 00:14:37.128 { 00:14:37.128 "name": null, 00:14:37.128 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:37.128 "is_configured": false, 00:14:37.128 "data_offset": 2048, 00:14:37.128 "data_size": 63488 00:14:37.128 }, 00:14:37.128 { 00:14:37.128 "name": "BaseBdev3", 00:14:37.128 "uuid": "a135011a-1a23-5ce4-bb1f-2518c3804857", 00:14:37.128 "is_configured": true, 00:14:37.128 "data_offset": 2048, 00:14:37.128 "data_size": 63488 00:14:37.128 }, 00:14:37.128 { 00:14:37.128 "name": "BaseBdev4", 00:14:37.128 "uuid": "31075842-e6d5-5caa-919c-f8992ed68e5a", 00:14:37.128 "is_configured": true, 00:14:37.128 "data_offset": 2048, 00:14:37.128 "data_size": 63488 00:14:37.128 } 00:14:37.128 ] 00:14:37.128 }' 00:14:37.128 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.128 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.128 08:52:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.128 08:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.128 08:52:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79156 00:14:37.128 08:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 79156 ']' 00:14:37.128 08:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 79156 00:14:37.128 08:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:14:37.128 08:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:37.128 08:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79156 00:14:37.128 08:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:37.128 08:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:14:37.128 killing process with pid 79156 00:14:37.128 08:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79156' 00:14:37.128 08:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 79156 00:14:37.128 Received shutdown signal, test time was about 17.764586 seconds 00:14:37.128 00:14:37.128 Latency(us) 00:14:37.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.128 =================================================================================================================== 00:14:37.128 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:37.128 [2024-09-28 08:52:15.041426] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:37.128 [2024-09-28 08:52:15.041567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.128 08:52:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 79156 00:14:37.128 [2024-09-28 08:52:15.041670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.128 [2024-09-28 08:52:15.041685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:37.698 [2024-09-28 08:52:15.483118] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.080 08:52:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:39.080 00:14:39.080 real 0m21.380s 00:14:39.080 user 0m27.606s 00:14:39.080 sys 0m2.723s 00:14:39.080 08:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:39.080 08:52:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.080 ************************************ 00:14:39.080 END TEST raid_rebuild_test_sb_io 00:14:39.080 ************************************ 00:14:39.080 08:52:16 bdev_raid -- 
bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:39.080 08:52:16 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:39.080 08:52:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:39.080 08:52:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:39.080 08:52:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.080 ************************************ 00:14:39.080 START TEST raid5f_state_function_test 00:14:39.080 ************************************ 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79877 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:39.080 Process raid pid: 79877 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 79877' 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79877 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 79877 ']' 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.080 08:52:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.080 [2024-09-28 08:52:17.056612] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:14:39.080 [2024-09-28 08:52:17.056743] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.340 [2024-09-28 08:52:17.228983] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.600 [2024-09-28 08:52:17.477956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.860 [2024-09-28 08:52:17.720879] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.860 [2024-09-28 08:52:17.720919] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.120 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:40.120 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:40.120 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:40.120 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.120 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.120 [2024-09-28 08:52:17.887556] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.120 [2024-09-28 08:52:17.887615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.120 [2024-09-28 08:52:17.887625] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:40.120 [2024-09-28 08:52:17.887635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:40.121 [2024-09-28 08:52:17.887641] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:40.121 [2024-09-28 08:52:17.887663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.121 "name": "Existed_Raid", 00:14:40.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.121 "strip_size_kb": 64, 00:14:40.121 "state": "configuring", 00:14:40.121 "raid_level": "raid5f", 00:14:40.121 "superblock": false, 00:14:40.121 "num_base_bdevs": 3, 00:14:40.121 "num_base_bdevs_discovered": 0, 00:14:40.121 "num_base_bdevs_operational": 3, 00:14:40.121 "base_bdevs_list": [ 00:14:40.121 { 00:14:40.121 "name": "BaseBdev1", 00:14:40.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.121 "is_configured": false, 00:14:40.121 "data_offset": 0, 00:14:40.121 "data_size": 0 00:14:40.121 }, 00:14:40.121 { 00:14:40.121 "name": "BaseBdev2", 00:14:40.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.121 "is_configured": false, 00:14:40.121 "data_offset": 0, 00:14:40.121 "data_size": 0 00:14:40.121 }, 00:14:40.121 { 00:14:40.121 "name": "BaseBdev3", 00:14:40.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.121 "is_configured": false, 00:14:40.121 "data_offset": 0, 00:14:40.121 "data_size": 0 00:14:40.121 } 00:14:40.121 ] 00:14:40.121 }' 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.121 08:52:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.386 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:40.386 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.386 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.386 [2024-09-28 08:52:18.294799] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:40.386 [2024-09-28 08:52:18.294842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:40.386 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.386 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:40.386 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.386 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.386 [2024-09-28 08:52:18.306810] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.386 [2024-09-28 08:52:18.306852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.386 [2024-09-28 08:52:18.306861] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:40.386 [2024-09-28 08:52:18.306871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:40.386 [2024-09-28 08:52:18.306877] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:40.386 [2024-09-28 08:52:18.306887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:40.386 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.386 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:40.386 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.386 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.646 [2024-09-28 08:52:18.395438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:40.646 BaseBdev1 00:14:40.646 08:52:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.646 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:40.646 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:40.646 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:40.646 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:40.646 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:40.646 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:40.646 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:40.646 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.646 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.646 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.646 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:40.646 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.646 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.646 [ 00:14:40.646 { 00:14:40.646 "name": "BaseBdev1", 00:14:40.646 "aliases": [ 00:14:40.646 "69ced13b-714a-486e-81a7-8a6408ce5c8b" 00:14:40.646 ], 00:14:40.646 "product_name": "Malloc disk", 00:14:40.646 "block_size": 512, 00:14:40.646 "num_blocks": 65536, 00:14:40.646 "uuid": "69ced13b-714a-486e-81a7-8a6408ce5c8b", 00:14:40.646 "assigned_rate_limits": { 00:14:40.646 "rw_ios_per_sec": 0, 00:14:40.646 
"rw_mbytes_per_sec": 0, 00:14:40.646 "r_mbytes_per_sec": 0, 00:14:40.646 "w_mbytes_per_sec": 0 00:14:40.646 }, 00:14:40.646 "claimed": true, 00:14:40.646 "claim_type": "exclusive_write", 00:14:40.646 "zoned": false, 00:14:40.646 "supported_io_types": { 00:14:40.646 "read": true, 00:14:40.646 "write": true, 00:14:40.646 "unmap": true, 00:14:40.646 "flush": true, 00:14:40.646 "reset": true, 00:14:40.646 "nvme_admin": false, 00:14:40.646 "nvme_io": false, 00:14:40.646 "nvme_io_md": false, 00:14:40.646 "write_zeroes": true, 00:14:40.646 "zcopy": true, 00:14:40.646 "get_zone_info": false, 00:14:40.646 "zone_management": false, 00:14:40.646 "zone_append": false, 00:14:40.646 "compare": false, 00:14:40.646 "compare_and_write": false, 00:14:40.646 "abort": true, 00:14:40.646 "seek_hole": false, 00:14:40.646 "seek_data": false, 00:14:40.646 "copy": true, 00:14:40.646 "nvme_iov_md": false 00:14:40.646 }, 00:14:40.646 "memory_domains": [ 00:14:40.646 { 00:14:40.647 "dma_device_id": "system", 00:14:40.647 "dma_device_type": 1 00:14:40.647 }, 00:14:40.647 { 00:14:40.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.647 "dma_device_type": 2 00:14:40.647 } 00:14:40.647 ], 00:14:40.647 "driver_specific": {} 00:14:40.647 } 00:14:40.647 ] 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.647 08:52:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.647 "name": "Existed_Raid", 00:14:40.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.647 "strip_size_kb": 64, 00:14:40.647 "state": "configuring", 00:14:40.647 "raid_level": "raid5f", 00:14:40.647 "superblock": false, 00:14:40.647 "num_base_bdevs": 3, 00:14:40.647 "num_base_bdevs_discovered": 1, 00:14:40.647 "num_base_bdevs_operational": 3, 00:14:40.647 "base_bdevs_list": [ 00:14:40.647 { 00:14:40.647 "name": "BaseBdev1", 00:14:40.647 "uuid": "69ced13b-714a-486e-81a7-8a6408ce5c8b", 00:14:40.647 "is_configured": true, 00:14:40.647 "data_offset": 0, 00:14:40.647 "data_size": 65536 00:14:40.647 }, 00:14:40.647 { 00:14:40.647 "name": 
"BaseBdev2", 00:14:40.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.647 "is_configured": false, 00:14:40.647 "data_offset": 0, 00:14:40.647 "data_size": 0 00:14:40.647 }, 00:14:40.647 { 00:14:40.647 "name": "BaseBdev3", 00:14:40.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.647 "is_configured": false, 00:14:40.647 "data_offset": 0, 00:14:40.647 "data_size": 0 00:14:40.647 } 00:14:40.647 ] 00:14:40.647 }' 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.647 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.907 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:40.907 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.907 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.907 [2024-09-28 08:52:18.886681] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:40.907 [2024-09-28 08:52:18.886772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:40.907 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.907 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:40.907 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.907 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.907 [2024-09-28 08:52:18.894718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:40.907 [2024-09-28 08:52:18.896906] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:40.907 [2024-09-28 08:52:18.896980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:40.907 [2024-09-28 08:52:18.897009] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:40.907 [2024-09-28 08:52:18.897032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:40.907 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.907 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:40.907 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:40.907 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:41.167 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.167 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.167 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.167 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.167 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.167 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.167 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.167 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.167 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.167 08:52:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.167 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.167 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.167 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.167 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.167 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.167 "name": "Existed_Raid", 00:14:41.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.167 "strip_size_kb": 64, 00:14:41.167 "state": "configuring", 00:14:41.167 "raid_level": "raid5f", 00:14:41.168 "superblock": false, 00:14:41.168 "num_base_bdevs": 3, 00:14:41.168 "num_base_bdevs_discovered": 1, 00:14:41.168 "num_base_bdevs_operational": 3, 00:14:41.168 "base_bdevs_list": [ 00:14:41.168 { 00:14:41.168 "name": "BaseBdev1", 00:14:41.168 "uuid": "69ced13b-714a-486e-81a7-8a6408ce5c8b", 00:14:41.168 "is_configured": true, 00:14:41.168 "data_offset": 0, 00:14:41.168 "data_size": 65536 00:14:41.168 }, 00:14:41.168 { 00:14:41.168 "name": "BaseBdev2", 00:14:41.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.168 "is_configured": false, 00:14:41.168 "data_offset": 0, 00:14:41.168 "data_size": 0 00:14:41.168 }, 00:14:41.168 { 00:14:41.168 "name": "BaseBdev3", 00:14:41.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.168 "is_configured": false, 00:14:41.168 "data_offset": 0, 00:14:41.168 "data_size": 0 00:14:41.168 } 00:14:41.168 ] 00:14:41.168 }' 00:14:41.168 08:52:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.168 08:52:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.428 [2024-09-28 08:52:19.351947] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:41.428 BaseBdev2 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:41.428 [ 00:14:41.428 { 00:14:41.428 "name": "BaseBdev2", 00:14:41.428 "aliases": [ 00:14:41.428 "a552b663-bcd8-41f1-b023-cc03b9e063f3" 00:14:41.428 ], 00:14:41.428 "product_name": "Malloc disk", 00:14:41.428 "block_size": 512, 00:14:41.428 "num_blocks": 65536, 00:14:41.428 "uuid": "a552b663-bcd8-41f1-b023-cc03b9e063f3", 00:14:41.428 "assigned_rate_limits": { 00:14:41.428 "rw_ios_per_sec": 0, 00:14:41.428 "rw_mbytes_per_sec": 0, 00:14:41.428 "r_mbytes_per_sec": 0, 00:14:41.428 "w_mbytes_per_sec": 0 00:14:41.428 }, 00:14:41.428 "claimed": true, 00:14:41.428 "claim_type": "exclusive_write", 00:14:41.428 "zoned": false, 00:14:41.428 "supported_io_types": { 00:14:41.428 "read": true, 00:14:41.428 "write": true, 00:14:41.428 "unmap": true, 00:14:41.428 "flush": true, 00:14:41.428 "reset": true, 00:14:41.428 "nvme_admin": false, 00:14:41.428 "nvme_io": false, 00:14:41.428 "nvme_io_md": false, 00:14:41.428 "write_zeroes": true, 00:14:41.428 "zcopy": true, 00:14:41.428 "get_zone_info": false, 00:14:41.428 "zone_management": false, 00:14:41.428 "zone_append": false, 00:14:41.428 "compare": false, 00:14:41.428 "compare_and_write": false, 00:14:41.428 "abort": true, 00:14:41.428 "seek_hole": false, 00:14:41.428 "seek_data": false, 00:14:41.428 "copy": true, 00:14:41.428 "nvme_iov_md": false 00:14:41.428 }, 00:14:41.428 "memory_domains": [ 00:14:41.428 { 00:14:41.428 "dma_device_id": "system", 00:14:41.428 "dma_device_type": 1 00:14:41.428 }, 00:14:41.428 { 00:14:41.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.428 "dma_device_type": 2 00:14:41.428 } 00:14:41.428 ], 00:14:41.428 "driver_specific": {} 00:14:41.428 } 00:14:41.428 ] 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.428 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.688 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:41.688 "name": "Existed_Raid", 00:14:41.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.688 "strip_size_kb": 64, 00:14:41.688 "state": "configuring", 00:14:41.688 "raid_level": "raid5f", 00:14:41.688 "superblock": false, 00:14:41.688 "num_base_bdevs": 3, 00:14:41.688 "num_base_bdevs_discovered": 2, 00:14:41.688 "num_base_bdevs_operational": 3, 00:14:41.688 "base_bdevs_list": [ 00:14:41.688 { 00:14:41.688 "name": "BaseBdev1", 00:14:41.688 "uuid": "69ced13b-714a-486e-81a7-8a6408ce5c8b", 00:14:41.688 "is_configured": true, 00:14:41.688 "data_offset": 0, 00:14:41.688 "data_size": 65536 00:14:41.688 }, 00:14:41.688 { 00:14:41.688 "name": "BaseBdev2", 00:14:41.688 "uuid": "a552b663-bcd8-41f1-b023-cc03b9e063f3", 00:14:41.688 "is_configured": true, 00:14:41.688 "data_offset": 0, 00:14:41.688 "data_size": 65536 00:14:41.688 }, 00:14:41.688 { 00:14:41.688 "name": "BaseBdev3", 00:14:41.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.688 "is_configured": false, 00:14:41.688 "data_offset": 0, 00:14:41.688 "data_size": 0 00:14:41.688 } 00:14:41.688 ] 00:14:41.688 }' 00:14:41.688 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.688 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.948 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:41.948 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.948 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.948 [2024-09-28 08:52:19.875977] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:41.949 [2024-09-28 08:52:19.876104] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:41.949 [2024-09-28 08:52:19.876150] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:41.949 [2024-09-28 08:52:19.876478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:41.949 [2024-09-28 08:52:19.882332] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:41.949 [2024-09-28 08:52:19.882384] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:41.949 [2024-09-28 08:52:19.882730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.949 BaseBdev3 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.949 [ 00:14:41.949 { 00:14:41.949 "name": "BaseBdev3", 00:14:41.949 "aliases": [ 00:14:41.949 "03bc5f89-69f1-460e-8513-6622692c4575" 00:14:41.949 ], 00:14:41.949 "product_name": "Malloc disk", 00:14:41.949 "block_size": 512, 00:14:41.949 "num_blocks": 65536, 00:14:41.949 "uuid": "03bc5f89-69f1-460e-8513-6622692c4575", 00:14:41.949 "assigned_rate_limits": { 00:14:41.949 "rw_ios_per_sec": 0, 00:14:41.949 "rw_mbytes_per_sec": 0, 00:14:41.949 "r_mbytes_per_sec": 0, 00:14:41.949 "w_mbytes_per_sec": 0 00:14:41.949 }, 00:14:41.949 "claimed": true, 00:14:41.949 "claim_type": "exclusive_write", 00:14:41.949 "zoned": false, 00:14:41.949 "supported_io_types": { 00:14:41.949 "read": true, 00:14:41.949 "write": true, 00:14:41.949 "unmap": true, 00:14:41.949 "flush": true, 00:14:41.949 "reset": true, 00:14:41.949 "nvme_admin": false, 00:14:41.949 "nvme_io": false, 00:14:41.949 "nvme_io_md": false, 00:14:41.949 "write_zeroes": true, 00:14:41.949 "zcopy": true, 00:14:41.949 "get_zone_info": false, 00:14:41.949 "zone_management": false, 00:14:41.949 "zone_append": false, 00:14:41.949 "compare": false, 00:14:41.949 "compare_and_write": false, 00:14:41.949 "abort": true, 00:14:41.949 "seek_hole": false, 00:14:41.949 "seek_data": false, 00:14:41.949 "copy": true, 00:14:41.949 "nvme_iov_md": false 00:14:41.949 }, 00:14:41.949 "memory_domains": [ 00:14:41.949 { 00:14:41.949 "dma_device_id": "system", 00:14:41.949 "dma_device_type": 1 00:14:41.949 }, 00:14:41.949 { 00:14:41.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.949 "dma_device_type": 2 00:14:41.949 } 00:14:41.949 ], 00:14:41.949 "driver_specific": {} 00:14:41.949 } 00:14:41.949 ] 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.949 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.209 08:52:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.209 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.209 "name": "Existed_Raid", 00:14:42.209 "uuid": "84ff4c19-958e-458b-b7f2-13d6db91e65e", 00:14:42.209 "strip_size_kb": 64, 00:14:42.209 "state": "online", 00:14:42.209 "raid_level": "raid5f", 00:14:42.209 "superblock": false, 00:14:42.209 "num_base_bdevs": 3, 00:14:42.209 "num_base_bdevs_discovered": 3, 00:14:42.209 "num_base_bdevs_operational": 3, 00:14:42.209 "base_bdevs_list": [ 00:14:42.209 { 00:14:42.209 "name": "BaseBdev1", 00:14:42.209 "uuid": "69ced13b-714a-486e-81a7-8a6408ce5c8b", 00:14:42.209 "is_configured": true, 00:14:42.209 "data_offset": 0, 00:14:42.209 "data_size": 65536 00:14:42.209 }, 00:14:42.209 { 00:14:42.209 "name": "BaseBdev2", 00:14:42.209 "uuid": "a552b663-bcd8-41f1-b023-cc03b9e063f3", 00:14:42.209 "is_configured": true, 00:14:42.209 "data_offset": 0, 00:14:42.209 "data_size": 65536 00:14:42.209 }, 00:14:42.209 { 00:14:42.209 "name": "BaseBdev3", 00:14:42.209 "uuid": "03bc5f89-69f1-460e-8513-6622692c4575", 00:14:42.209 "is_configured": true, 00:14:42.209 "data_offset": 0, 00:14:42.209 "data_size": 65536 00:14:42.209 } 00:14:42.209 ] 00:14:42.209 }' 00:14:42.209 08:52:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.209 08:52:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.469 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:42.469 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:42.469 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:42.469 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:42.469 08:52:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:42.469 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:42.469 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:42.469 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:42.469 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.469 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.469 [2024-09-28 08:52:20.405132] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.469 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.469 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:42.469 "name": "Existed_Raid", 00:14:42.469 "aliases": [ 00:14:42.469 "84ff4c19-958e-458b-b7f2-13d6db91e65e" 00:14:42.469 ], 00:14:42.469 "product_name": "Raid Volume", 00:14:42.469 "block_size": 512, 00:14:42.469 "num_blocks": 131072, 00:14:42.469 "uuid": "84ff4c19-958e-458b-b7f2-13d6db91e65e", 00:14:42.469 "assigned_rate_limits": { 00:14:42.469 "rw_ios_per_sec": 0, 00:14:42.469 "rw_mbytes_per_sec": 0, 00:14:42.469 "r_mbytes_per_sec": 0, 00:14:42.469 "w_mbytes_per_sec": 0 00:14:42.469 }, 00:14:42.469 "claimed": false, 00:14:42.469 "zoned": false, 00:14:42.469 "supported_io_types": { 00:14:42.469 "read": true, 00:14:42.469 "write": true, 00:14:42.469 "unmap": false, 00:14:42.469 "flush": false, 00:14:42.469 "reset": true, 00:14:42.469 "nvme_admin": false, 00:14:42.469 "nvme_io": false, 00:14:42.469 "nvme_io_md": false, 00:14:42.469 "write_zeroes": true, 00:14:42.469 "zcopy": false, 00:14:42.469 "get_zone_info": false, 00:14:42.469 "zone_management": false, 00:14:42.469 "zone_append": false, 
00:14:42.469 "compare": false, 00:14:42.469 "compare_and_write": false, 00:14:42.469 "abort": false, 00:14:42.469 "seek_hole": false, 00:14:42.469 "seek_data": false, 00:14:42.469 "copy": false, 00:14:42.469 "nvme_iov_md": false 00:14:42.469 }, 00:14:42.469 "driver_specific": { 00:14:42.469 "raid": { 00:14:42.469 "uuid": "84ff4c19-958e-458b-b7f2-13d6db91e65e", 00:14:42.469 "strip_size_kb": 64, 00:14:42.469 "state": "online", 00:14:42.469 "raid_level": "raid5f", 00:14:42.469 "superblock": false, 00:14:42.469 "num_base_bdevs": 3, 00:14:42.469 "num_base_bdevs_discovered": 3, 00:14:42.469 "num_base_bdevs_operational": 3, 00:14:42.469 "base_bdevs_list": [ 00:14:42.469 { 00:14:42.469 "name": "BaseBdev1", 00:14:42.469 "uuid": "69ced13b-714a-486e-81a7-8a6408ce5c8b", 00:14:42.469 "is_configured": true, 00:14:42.469 "data_offset": 0, 00:14:42.469 "data_size": 65536 00:14:42.469 }, 00:14:42.469 { 00:14:42.469 "name": "BaseBdev2", 00:14:42.469 "uuid": "a552b663-bcd8-41f1-b023-cc03b9e063f3", 00:14:42.469 "is_configured": true, 00:14:42.469 "data_offset": 0, 00:14:42.469 "data_size": 65536 00:14:42.469 }, 00:14:42.469 { 00:14:42.469 "name": "BaseBdev3", 00:14:42.469 "uuid": "03bc5f89-69f1-460e-8513-6622692c4575", 00:14:42.469 "is_configured": true, 00:14:42.469 "data_offset": 0, 00:14:42.469 "data_size": 65536 00:14:42.469 } 00:14:42.469 ] 00:14:42.469 } 00:14:42.469 } 00:14:42.469 }' 00:14:42.470 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:42.734 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:42.734 BaseBdev2 00:14:42.734 BaseBdev3' 00:14:42.734 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.734 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:42.734 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.734 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.734 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:42.734 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.734 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.734 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.734 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.734 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.734 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.734 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:42.734 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.734 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.735 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.735 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.735 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.735 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.735 08:52:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.735 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:42.735 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.735 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.735 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.735 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.735 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.735 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.735 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:42.735 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.735 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.735 [2024-09-28 08:52:20.684474] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:42.995 
08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.995 "name": "Existed_Raid", 00:14:42.995 "uuid": "84ff4c19-958e-458b-b7f2-13d6db91e65e", 00:14:42.995 "strip_size_kb": 64, 00:14:42.995 "state": 
"online", 00:14:42.995 "raid_level": "raid5f", 00:14:42.995 "superblock": false, 00:14:42.995 "num_base_bdevs": 3, 00:14:42.995 "num_base_bdevs_discovered": 2, 00:14:42.995 "num_base_bdevs_operational": 2, 00:14:42.995 "base_bdevs_list": [ 00:14:42.995 { 00:14:42.995 "name": null, 00:14:42.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.995 "is_configured": false, 00:14:42.995 "data_offset": 0, 00:14:42.995 "data_size": 65536 00:14:42.995 }, 00:14:42.995 { 00:14:42.995 "name": "BaseBdev2", 00:14:42.995 "uuid": "a552b663-bcd8-41f1-b023-cc03b9e063f3", 00:14:42.995 "is_configured": true, 00:14:42.995 "data_offset": 0, 00:14:42.995 "data_size": 65536 00:14:42.995 }, 00:14:42.995 { 00:14:42.995 "name": "BaseBdev3", 00:14:42.995 "uuid": "03bc5f89-69f1-460e-8513-6622692c4575", 00:14:42.995 "is_configured": true, 00:14:42.995 "data_offset": 0, 00:14:42.995 "data_size": 65536 00:14:42.995 } 00:14:42.995 ] 00:14:42.995 }' 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.995 08:52:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.255 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:43.255 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:43.255 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:43.255 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.255 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.255 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.255 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.515 [2024-09-28 08:52:21.261894] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:43.515 [2024-09-28 08:52:21.262057] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.515 [2024-09-28 08:52:21.361153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.515 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.515 [2024-09-28 08:52:21.417043] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:43.515 [2024-09-28 08:52:21.417145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.776 BaseBdev2 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.776 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:43.776 [ 00:14:43.776 { 00:14:43.776 "name": "BaseBdev2", 00:14:43.776 "aliases": [ 00:14:43.776 "6ae6472b-3d7d-45e0-8d5d-c0c5f20b5d32" 00:14:43.776 ], 00:14:43.777 "product_name": "Malloc disk", 00:14:43.777 "block_size": 512, 00:14:43.777 "num_blocks": 65536, 00:14:43.777 "uuid": "6ae6472b-3d7d-45e0-8d5d-c0c5f20b5d32", 00:14:43.777 "assigned_rate_limits": { 00:14:43.777 "rw_ios_per_sec": 0, 00:14:43.777 "rw_mbytes_per_sec": 0, 00:14:43.777 "r_mbytes_per_sec": 0, 00:14:43.777 "w_mbytes_per_sec": 0 00:14:43.777 }, 00:14:43.777 "claimed": false, 00:14:43.777 "zoned": false, 00:14:43.777 "supported_io_types": { 00:14:43.777 "read": true, 00:14:43.777 "write": true, 00:14:43.777 "unmap": true, 00:14:43.777 "flush": true, 00:14:43.777 "reset": true, 00:14:43.777 "nvme_admin": false, 00:14:43.777 "nvme_io": false, 00:14:43.777 "nvme_io_md": false, 00:14:43.777 "write_zeroes": true, 00:14:43.777 "zcopy": true, 00:14:43.777 "get_zone_info": false, 00:14:43.777 "zone_management": false, 00:14:43.777 "zone_append": false, 00:14:43.777 "compare": false, 00:14:43.777 "compare_and_write": false, 00:14:43.777 "abort": true, 00:14:43.777 "seek_hole": false, 00:14:43.777 "seek_data": false, 00:14:43.777 "copy": true, 00:14:43.777 "nvme_iov_md": false 00:14:43.777 }, 00:14:43.777 "memory_domains": [ 00:14:43.777 { 00:14:43.777 "dma_device_id": "system", 00:14:43.777 "dma_device_type": 1 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.777 "dma_device_type": 2 00:14:43.777 } 00:14:43.777 ], 00:14:43.777 "driver_specific": {} 00:14:43.777 } 00:14:43.777 ] 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.777 BaseBdev3 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:43.777 [ 00:14:43.777 { 00:14:43.777 "name": "BaseBdev3", 00:14:43.777 "aliases": [ 00:14:43.777 "2ec30549-80f6-44d1-9a27-95d4936eeab6" 00:14:43.777 ], 00:14:43.777 "product_name": "Malloc disk", 00:14:43.777 "block_size": 512, 00:14:43.777 "num_blocks": 65536, 00:14:43.777 "uuid": "2ec30549-80f6-44d1-9a27-95d4936eeab6", 00:14:43.777 "assigned_rate_limits": { 00:14:43.777 "rw_ios_per_sec": 0, 00:14:43.777 "rw_mbytes_per_sec": 0, 00:14:43.777 "r_mbytes_per_sec": 0, 00:14:43.777 "w_mbytes_per_sec": 0 00:14:43.777 }, 00:14:43.777 "claimed": false, 00:14:43.777 "zoned": false, 00:14:43.777 "supported_io_types": { 00:14:43.777 "read": true, 00:14:43.777 "write": true, 00:14:43.777 "unmap": true, 00:14:43.777 "flush": true, 00:14:43.777 "reset": true, 00:14:43.777 "nvme_admin": false, 00:14:43.777 "nvme_io": false, 00:14:43.777 "nvme_io_md": false, 00:14:43.777 "write_zeroes": true, 00:14:43.777 "zcopy": true, 00:14:43.777 "get_zone_info": false, 00:14:43.777 "zone_management": false, 00:14:43.777 "zone_append": false, 00:14:43.777 "compare": false, 00:14:43.777 "compare_and_write": false, 00:14:43.777 "abort": true, 00:14:43.777 "seek_hole": false, 00:14:43.777 "seek_data": false, 00:14:43.777 "copy": true, 00:14:43.777 "nvme_iov_md": false 00:14:43.777 }, 00:14:43.777 "memory_domains": [ 00:14:43.777 { 00:14:43.777 "dma_device_id": "system", 00:14:43.777 "dma_device_type": 1 00:14:43.777 }, 00:14:43.777 { 00:14:43.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.777 "dma_device_type": 2 00:14:43.777 } 00:14:43.777 ], 00:14:43.777 "driver_specific": {} 00:14:43.777 } 00:14:43.777 ] 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:43.777 08:52:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.777 [2024-09-28 08:52:21.745282] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:43.777 [2024-09-28 08:52:21.745369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:43.777 [2024-09-28 08:52:21.745408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.777 [2024-09-28 08:52:21.747477] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.777 08:52:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.777 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.037 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.037 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.037 "name": "Existed_Raid", 00:14:44.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.037 "strip_size_kb": 64, 00:14:44.037 "state": "configuring", 00:14:44.037 "raid_level": "raid5f", 00:14:44.037 "superblock": false, 00:14:44.037 "num_base_bdevs": 3, 00:14:44.037 "num_base_bdevs_discovered": 2, 00:14:44.037 "num_base_bdevs_operational": 3, 00:14:44.037 "base_bdevs_list": [ 00:14:44.037 { 00:14:44.037 "name": "BaseBdev1", 00:14:44.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.037 "is_configured": false, 00:14:44.037 "data_offset": 0, 00:14:44.037 "data_size": 0 00:14:44.037 }, 00:14:44.037 { 00:14:44.037 "name": "BaseBdev2", 00:14:44.037 "uuid": "6ae6472b-3d7d-45e0-8d5d-c0c5f20b5d32", 00:14:44.037 "is_configured": true, 00:14:44.037 "data_offset": 0, 00:14:44.037 "data_size": 65536 00:14:44.037 }, 00:14:44.037 { 00:14:44.037 "name": "BaseBdev3", 00:14:44.037 "uuid": "2ec30549-80f6-44d1-9a27-95d4936eeab6", 00:14:44.037 "is_configured": true, 
00:14:44.037 "data_offset": 0, 00:14:44.037 "data_size": 65536 00:14:44.037 } 00:14:44.037 ] 00:14:44.037 }' 00:14:44.037 08:52:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.037 08:52:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.297 [2024-09-28 08:52:22.148549] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.297 08:52:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.297 "name": "Existed_Raid", 00:14:44.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.297 "strip_size_kb": 64, 00:14:44.297 "state": "configuring", 00:14:44.297 "raid_level": "raid5f", 00:14:44.297 "superblock": false, 00:14:44.297 "num_base_bdevs": 3, 00:14:44.297 "num_base_bdevs_discovered": 1, 00:14:44.297 "num_base_bdevs_operational": 3, 00:14:44.297 "base_bdevs_list": [ 00:14:44.297 { 00:14:44.297 "name": "BaseBdev1", 00:14:44.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.297 "is_configured": false, 00:14:44.297 "data_offset": 0, 00:14:44.297 "data_size": 0 00:14:44.297 }, 00:14:44.297 { 00:14:44.297 "name": null, 00:14:44.297 "uuid": "6ae6472b-3d7d-45e0-8d5d-c0c5f20b5d32", 00:14:44.297 "is_configured": false, 00:14:44.297 "data_offset": 0, 00:14:44.297 "data_size": 65536 00:14:44.297 }, 00:14:44.297 { 00:14:44.297 "name": "BaseBdev3", 00:14:44.297 "uuid": "2ec30549-80f6-44d1-9a27-95d4936eeab6", 00:14:44.297 "is_configured": true, 00:14:44.297 "data_offset": 0, 00:14:44.297 "data_size": 65536 00:14:44.297 } 00:14:44.297 ] 00:14:44.297 }' 00:14:44.297 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.297 08:52:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.869 [2024-09-28 08:52:22.653680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.869 BaseBdev1 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:44.869 08:52:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.869 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.869 [ 00:14:44.869 { 00:14:44.869 "name": "BaseBdev1", 00:14:44.869 "aliases": [ 00:14:44.869 "49629351-1f88-4125-a09d-3ca0eda8072f" 00:14:44.870 ], 00:14:44.870 "product_name": "Malloc disk", 00:14:44.870 "block_size": 512, 00:14:44.870 "num_blocks": 65536, 00:14:44.870 "uuid": "49629351-1f88-4125-a09d-3ca0eda8072f", 00:14:44.870 "assigned_rate_limits": { 00:14:44.870 "rw_ios_per_sec": 0, 00:14:44.870 "rw_mbytes_per_sec": 0, 00:14:44.870 "r_mbytes_per_sec": 0, 00:14:44.870 "w_mbytes_per_sec": 0 00:14:44.870 }, 00:14:44.870 "claimed": true, 00:14:44.870 "claim_type": "exclusive_write", 00:14:44.870 "zoned": false, 00:14:44.870 "supported_io_types": { 00:14:44.870 "read": true, 00:14:44.870 "write": true, 00:14:44.870 "unmap": true, 00:14:44.870 "flush": true, 00:14:44.870 "reset": true, 00:14:44.870 "nvme_admin": false, 00:14:44.870 "nvme_io": false, 00:14:44.870 "nvme_io_md": false, 00:14:44.870 "write_zeroes": true, 00:14:44.870 "zcopy": true, 00:14:44.870 "get_zone_info": false, 00:14:44.870 "zone_management": false, 00:14:44.870 "zone_append": false, 00:14:44.870 
"compare": false, 00:14:44.870 "compare_and_write": false, 00:14:44.870 "abort": true, 00:14:44.870 "seek_hole": false, 00:14:44.870 "seek_data": false, 00:14:44.870 "copy": true, 00:14:44.870 "nvme_iov_md": false 00:14:44.870 }, 00:14:44.870 "memory_domains": [ 00:14:44.870 { 00:14:44.870 "dma_device_id": "system", 00:14:44.870 "dma_device_type": 1 00:14:44.870 }, 00:14:44.870 { 00:14:44.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.870 "dma_device_type": 2 00:14:44.870 } 00:14:44.870 ], 00:14:44.870 "driver_specific": {} 00:14:44.870 } 00:14:44.870 ] 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.870 08:52:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.870 "name": "Existed_Raid", 00:14:44.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.870 "strip_size_kb": 64, 00:14:44.870 "state": "configuring", 00:14:44.870 "raid_level": "raid5f", 00:14:44.870 "superblock": false, 00:14:44.870 "num_base_bdevs": 3, 00:14:44.870 "num_base_bdevs_discovered": 2, 00:14:44.870 "num_base_bdevs_operational": 3, 00:14:44.870 "base_bdevs_list": [ 00:14:44.870 { 00:14:44.870 "name": "BaseBdev1", 00:14:44.870 "uuid": "49629351-1f88-4125-a09d-3ca0eda8072f", 00:14:44.870 "is_configured": true, 00:14:44.870 "data_offset": 0, 00:14:44.870 "data_size": 65536 00:14:44.870 }, 00:14:44.870 { 00:14:44.870 "name": null, 00:14:44.870 "uuid": "6ae6472b-3d7d-45e0-8d5d-c0c5f20b5d32", 00:14:44.870 "is_configured": false, 00:14:44.870 "data_offset": 0, 00:14:44.870 "data_size": 65536 00:14:44.870 }, 00:14:44.870 { 00:14:44.870 "name": "BaseBdev3", 00:14:44.870 "uuid": "2ec30549-80f6-44d1-9a27-95d4936eeab6", 00:14:44.870 "is_configured": true, 00:14:44.870 "data_offset": 0, 00:14:44.870 "data_size": 65536 00:14:44.870 } 00:14:44.870 ] 00:14:44.870 }' 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.870 08:52:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.487 08:52:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.487 [2024-09-28 08:52:23.216740] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.487 08:52:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.487 "name": "Existed_Raid", 00:14:45.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.487 "strip_size_kb": 64, 00:14:45.487 "state": "configuring", 00:14:45.487 "raid_level": "raid5f", 00:14:45.487 "superblock": false, 00:14:45.487 "num_base_bdevs": 3, 00:14:45.487 "num_base_bdevs_discovered": 1, 00:14:45.487 "num_base_bdevs_operational": 3, 00:14:45.487 "base_bdevs_list": [ 00:14:45.487 { 00:14:45.487 "name": "BaseBdev1", 00:14:45.487 "uuid": "49629351-1f88-4125-a09d-3ca0eda8072f", 00:14:45.487 "is_configured": true, 00:14:45.487 "data_offset": 0, 00:14:45.487 "data_size": 65536 00:14:45.487 }, 00:14:45.487 { 00:14:45.487 "name": null, 00:14:45.487 "uuid": "6ae6472b-3d7d-45e0-8d5d-c0c5f20b5d32", 00:14:45.487 "is_configured": false, 00:14:45.487 "data_offset": 0, 00:14:45.487 "data_size": 65536 00:14:45.487 }, 00:14:45.487 { 00:14:45.487 "name": null, 
00:14:45.487 "uuid": "2ec30549-80f6-44d1-9a27-95d4936eeab6", 00:14:45.487 "is_configured": false, 00:14:45.487 "data_offset": 0, 00:14:45.487 "data_size": 65536 00:14:45.487 } 00:14:45.487 ] 00:14:45.487 }' 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.487 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.802 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.802 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:45.802 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.802 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.802 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.802 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:45.802 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:45.802 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.802 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.802 [2024-09-28 08:52:23.739833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.802 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.803 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:45.803 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.803 08:52:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.803 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.803 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.803 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.803 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.803 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.803 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.803 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.803 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.803 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.803 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.803 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.071 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.071 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.071 "name": "Existed_Raid", 00:14:46.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.071 "strip_size_kb": 64, 00:14:46.071 "state": "configuring", 00:14:46.071 "raid_level": "raid5f", 00:14:46.071 "superblock": false, 00:14:46.071 "num_base_bdevs": 3, 00:14:46.071 "num_base_bdevs_discovered": 2, 00:14:46.071 "num_base_bdevs_operational": 3, 00:14:46.071 "base_bdevs_list": [ 00:14:46.071 { 
00:14:46.071 "name": "BaseBdev1", 00:14:46.071 "uuid": "49629351-1f88-4125-a09d-3ca0eda8072f", 00:14:46.071 "is_configured": true, 00:14:46.071 "data_offset": 0, 00:14:46.071 "data_size": 65536 00:14:46.071 }, 00:14:46.071 { 00:14:46.071 "name": null, 00:14:46.071 "uuid": "6ae6472b-3d7d-45e0-8d5d-c0c5f20b5d32", 00:14:46.071 "is_configured": false, 00:14:46.071 "data_offset": 0, 00:14:46.071 "data_size": 65536 00:14:46.071 }, 00:14:46.071 { 00:14:46.071 "name": "BaseBdev3", 00:14:46.071 "uuid": "2ec30549-80f6-44d1-9a27-95d4936eeab6", 00:14:46.071 "is_configured": true, 00:14:46.071 "data_offset": 0, 00:14:46.071 "data_size": 65536 00:14:46.071 } 00:14:46.071 ] 00:14:46.071 }' 00:14:46.071 08:52:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.071 08:52:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.331 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.331 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:46.331 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.331 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.331 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.331 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:46.331 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:46.331 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.331 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.331 [2024-09-28 08:52:24.251010] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.591 "name": "Existed_Raid", 00:14:46.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.591 "strip_size_kb": 64, 00:14:46.591 "state": "configuring", 00:14:46.591 "raid_level": "raid5f", 00:14:46.591 "superblock": false, 00:14:46.591 "num_base_bdevs": 3, 00:14:46.591 "num_base_bdevs_discovered": 1, 00:14:46.591 "num_base_bdevs_operational": 3, 00:14:46.591 "base_bdevs_list": [ 00:14:46.591 { 00:14:46.591 "name": null, 00:14:46.591 "uuid": "49629351-1f88-4125-a09d-3ca0eda8072f", 00:14:46.591 "is_configured": false, 00:14:46.591 "data_offset": 0, 00:14:46.591 "data_size": 65536 00:14:46.591 }, 00:14:46.591 { 00:14:46.591 "name": null, 00:14:46.591 "uuid": "6ae6472b-3d7d-45e0-8d5d-c0c5f20b5d32", 00:14:46.591 "is_configured": false, 00:14:46.591 "data_offset": 0, 00:14:46.591 "data_size": 65536 00:14:46.591 }, 00:14:46.591 { 00:14:46.591 "name": "BaseBdev3", 00:14:46.591 "uuid": "2ec30549-80f6-44d1-9a27-95d4936eeab6", 00:14:46.591 "is_configured": true, 00:14:46.591 "data_offset": 0, 00:14:46.591 "data_size": 65536 00:14:46.591 } 00:14:46.591 ] 00:14:46.591 }' 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.591 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.851 [2024-09-28 08:52:24.821443] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.851 08:52:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.851 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.110 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.110 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.110 "name": "Existed_Raid", 00:14:47.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.110 "strip_size_kb": 64, 00:14:47.110 "state": "configuring", 00:14:47.110 "raid_level": "raid5f", 00:14:47.110 "superblock": false, 00:14:47.110 "num_base_bdevs": 3, 00:14:47.110 "num_base_bdevs_discovered": 2, 00:14:47.110 "num_base_bdevs_operational": 3, 00:14:47.110 "base_bdevs_list": [ 00:14:47.110 { 00:14:47.110 "name": null, 00:14:47.110 "uuid": "49629351-1f88-4125-a09d-3ca0eda8072f", 00:14:47.110 "is_configured": false, 00:14:47.110 "data_offset": 0, 00:14:47.110 "data_size": 65536 00:14:47.110 }, 00:14:47.110 { 00:14:47.110 "name": "BaseBdev2", 00:14:47.110 "uuid": "6ae6472b-3d7d-45e0-8d5d-c0c5f20b5d32", 00:14:47.110 "is_configured": true, 00:14:47.110 "data_offset": 0, 00:14:47.110 "data_size": 65536 00:14:47.110 }, 00:14:47.110 { 00:14:47.110 "name": "BaseBdev3", 00:14:47.110 "uuid": "2ec30549-80f6-44d1-9a27-95d4936eeab6", 00:14:47.110 "is_configured": true, 00:14:47.110 "data_offset": 0, 00:14:47.110 "data_size": 65536 00:14:47.110 } 00:14:47.110 ] 00:14:47.110 }' 00:14:47.110 08:52:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.111 08:52:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.370 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:47.370 
08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.370 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.370 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.370 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.370 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:47.370 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:47.370 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.370 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.370 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.370 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.370 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 49629351-1f88-4125-a09d-3ca0eda8072f 00:14:47.370 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.370 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.630 [2024-09-28 08:52:25.366722] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:47.630 [2024-09-28 08:52:25.366826] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:47.630 [2024-09-28 08:52:25.366843] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:47.630 [2024-09-28 08:52:25.367140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:14:47.630 [2024-09-28 08:52:25.372618] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:47.630 [2024-09-28 08:52:25.372643] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:47.630 [2024-09-28 08:52:25.372956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.630 NewBaseBdev 00:14:47.630 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.630 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:47.630 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:47.630 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:47.630 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:47.630 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:47.630 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:47.630 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:47.630 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.630 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.630 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.630 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:47.630 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.630 08:52:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.630 [ 00:14:47.630 { 00:14:47.630 "name": "NewBaseBdev", 00:14:47.630 "aliases": [ 00:14:47.630 "49629351-1f88-4125-a09d-3ca0eda8072f" 00:14:47.630 ], 00:14:47.630 "product_name": "Malloc disk", 00:14:47.630 "block_size": 512, 00:14:47.630 "num_blocks": 65536, 00:14:47.631 "uuid": "49629351-1f88-4125-a09d-3ca0eda8072f", 00:14:47.631 "assigned_rate_limits": { 00:14:47.631 "rw_ios_per_sec": 0, 00:14:47.631 "rw_mbytes_per_sec": 0, 00:14:47.631 "r_mbytes_per_sec": 0, 00:14:47.631 "w_mbytes_per_sec": 0 00:14:47.631 }, 00:14:47.631 "claimed": true, 00:14:47.631 "claim_type": "exclusive_write", 00:14:47.631 "zoned": false, 00:14:47.631 "supported_io_types": { 00:14:47.631 "read": true, 00:14:47.631 "write": true, 00:14:47.631 "unmap": true, 00:14:47.631 "flush": true, 00:14:47.631 "reset": true, 00:14:47.631 "nvme_admin": false, 00:14:47.631 "nvme_io": false, 00:14:47.631 "nvme_io_md": false, 00:14:47.631 "write_zeroes": true, 00:14:47.631 "zcopy": true, 00:14:47.631 "get_zone_info": false, 00:14:47.631 "zone_management": false, 00:14:47.631 "zone_append": false, 00:14:47.631 "compare": false, 00:14:47.631 "compare_and_write": false, 00:14:47.631 "abort": true, 00:14:47.631 "seek_hole": false, 00:14:47.631 "seek_data": false, 00:14:47.631 "copy": true, 00:14:47.631 "nvme_iov_md": false 00:14:47.631 }, 00:14:47.631 "memory_domains": [ 00:14:47.631 { 00:14:47.631 "dma_device_id": "system", 00:14:47.631 "dma_device_type": 1 00:14:47.631 }, 00:14:47.631 { 00:14:47.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.631 "dma_device_type": 2 00:14:47.631 } 00:14:47.631 ], 00:14:47.631 "driver_specific": {} 00:14:47.631 } 00:14:47.631 ] 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:47.631 08:52:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.631 "name": "Existed_Raid", 00:14:47.631 "uuid": "db3b7afe-2b78-4668-8db4-6b8daadb04e8", 00:14:47.631 "strip_size_kb": 64, 00:14:47.631 "state": "online", 
00:14:47.631 "raid_level": "raid5f", 00:14:47.631 "superblock": false, 00:14:47.631 "num_base_bdevs": 3, 00:14:47.631 "num_base_bdevs_discovered": 3, 00:14:47.631 "num_base_bdevs_operational": 3, 00:14:47.631 "base_bdevs_list": [ 00:14:47.631 { 00:14:47.631 "name": "NewBaseBdev", 00:14:47.631 "uuid": "49629351-1f88-4125-a09d-3ca0eda8072f", 00:14:47.631 "is_configured": true, 00:14:47.631 "data_offset": 0, 00:14:47.631 "data_size": 65536 00:14:47.631 }, 00:14:47.631 { 00:14:47.631 "name": "BaseBdev2", 00:14:47.631 "uuid": "6ae6472b-3d7d-45e0-8d5d-c0c5f20b5d32", 00:14:47.631 "is_configured": true, 00:14:47.631 "data_offset": 0, 00:14:47.631 "data_size": 65536 00:14:47.631 }, 00:14:47.631 { 00:14:47.631 "name": "BaseBdev3", 00:14:47.631 "uuid": "2ec30549-80f6-44d1-9a27-95d4936eeab6", 00:14:47.631 "is_configured": true, 00:14:47.631 "data_offset": 0, 00:14:47.631 "data_size": 65536 00:14:47.631 } 00:14:47.631 ] 00:14:47.631 }' 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.631 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.891 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:47.891 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:47.891 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:47.891 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:47.891 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:47.891 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:47.891 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:47.891 08:52:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:47.891 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.891 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.891 [2024-09-28 08:52:25.827569] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.891 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.891 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:47.891 "name": "Existed_Raid", 00:14:47.891 "aliases": [ 00:14:47.891 "db3b7afe-2b78-4668-8db4-6b8daadb04e8" 00:14:47.891 ], 00:14:47.891 "product_name": "Raid Volume", 00:14:47.891 "block_size": 512, 00:14:47.891 "num_blocks": 131072, 00:14:47.891 "uuid": "db3b7afe-2b78-4668-8db4-6b8daadb04e8", 00:14:47.891 "assigned_rate_limits": { 00:14:47.891 "rw_ios_per_sec": 0, 00:14:47.891 "rw_mbytes_per_sec": 0, 00:14:47.891 "r_mbytes_per_sec": 0, 00:14:47.891 "w_mbytes_per_sec": 0 00:14:47.891 }, 00:14:47.891 "claimed": false, 00:14:47.891 "zoned": false, 00:14:47.891 "supported_io_types": { 00:14:47.891 "read": true, 00:14:47.891 "write": true, 00:14:47.891 "unmap": false, 00:14:47.891 "flush": false, 00:14:47.891 "reset": true, 00:14:47.891 "nvme_admin": false, 00:14:47.891 "nvme_io": false, 00:14:47.891 "nvme_io_md": false, 00:14:47.891 "write_zeroes": true, 00:14:47.891 "zcopy": false, 00:14:47.891 "get_zone_info": false, 00:14:47.891 "zone_management": false, 00:14:47.891 "zone_append": false, 00:14:47.891 "compare": false, 00:14:47.891 "compare_and_write": false, 00:14:47.891 "abort": false, 00:14:47.891 "seek_hole": false, 00:14:47.891 "seek_data": false, 00:14:47.891 "copy": false, 00:14:47.891 "nvme_iov_md": false 00:14:47.891 }, 00:14:47.891 "driver_specific": { 00:14:47.891 "raid": { 00:14:47.891 "uuid": 
"db3b7afe-2b78-4668-8db4-6b8daadb04e8", 00:14:47.891 "strip_size_kb": 64, 00:14:47.891 "state": "online", 00:14:47.891 "raid_level": "raid5f", 00:14:47.891 "superblock": false, 00:14:47.891 "num_base_bdevs": 3, 00:14:47.891 "num_base_bdevs_discovered": 3, 00:14:47.891 "num_base_bdevs_operational": 3, 00:14:47.891 "base_bdevs_list": [ 00:14:47.891 { 00:14:47.891 "name": "NewBaseBdev", 00:14:47.891 "uuid": "49629351-1f88-4125-a09d-3ca0eda8072f", 00:14:47.891 "is_configured": true, 00:14:47.891 "data_offset": 0, 00:14:47.891 "data_size": 65536 00:14:47.891 }, 00:14:47.891 { 00:14:47.891 "name": "BaseBdev2", 00:14:47.891 "uuid": "6ae6472b-3d7d-45e0-8d5d-c0c5f20b5d32", 00:14:47.891 "is_configured": true, 00:14:47.891 "data_offset": 0, 00:14:47.891 "data_size": 65536 00:14:47.891 }, 00:14:47.891 { 00:14:47.891 "name": "BaseBdev3", 00:14:47.891 "uuid": "2ec30549-80f6-44d1-9a27-95d4936eeab6", 00:14:47.891 "is_configured": true, 00:14:47.891 "data_offset": 0, 00:14:47.891 "data_size": 65536 00:14:47.891 } 00:14:47.891 ] 00:14:47.891 } 00:14:47.891 } 00:14:47.891 }' 00:14:47.891 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:48.150 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:48.150 BaseBdev2 00:14:48.150 BaseBdev3' 00:14:48.150 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.150 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:48.150 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.150 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:48.150 08:52:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.150 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.150 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.150 08:52:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.150 08:52:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.150 08:52:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.150 08:52:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.150 08:52:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:48.150 08:52:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.150 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.150 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.150 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.150 08:52:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.151 08:52:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.151 [2024-09-28 08:52:26.110973] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.151 [2024-09-28 08:52:26.110999] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.151 [2024-09-28 08:52:26.111075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.151 [2024-09-28 08:52:26.111397] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.151 [2024-09-28 08:52:26.111412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79877 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 79877 ']' 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 79877 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.151 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79877 00:14:48.410 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:48.410 killing process with pid 79877 00:14:48.410 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:48.410 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79877' 00:14:48.410 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 79877 00:14:48.410 [2024-09-28 08:52:26.148323] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.410 08:52:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 79877 00:14:48.669 [2024-09-28 08:52:26.462845] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:50.049 08:52:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:50.049 00:14:50.049 real 0m10.842s 00:14:50.049 user 0m16.783s 00:14:50.049 sys 0m2.199s 00:14:50.049 ************************************ 00:14:50.049 END TEST raid5f_state_function_test 00:14:50.049 ************************************ 00:14:50.049 08:52:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:50.049 08:52:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.049 08:52:27 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:50.049 08:52:27 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:50.050 08:52:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:50.050 08:52:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:50.050 ************************************ 00:14:50.050 START TEST raid5f_state_function_test_sb 00:14:50.050 ************************************ 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:50.050 08:52:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80504 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80504' 00:14:50.050 Process raid pid: 80504 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80504 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80504 ']' 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.050 08:52:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.050 [2024-09-28 08:52:27.977749] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:14:50.050 [2024-09-28 08:52:27.977962] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.310 [2024-09-28 08:52:28.150205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.569 [2024-09-28 08:52:28.395248] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.829 [2024-09-28 08:52:28.628723] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.829 [2024-09-28 08:52:28.628755] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.829 [2024-09-28 08:52:28.802740] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.829 [2024-09-28 08:52:28.802804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.829 [2024-09-28 08:52:28.802816] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.829 [2024-09-28 08:52:28.802828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.829 [2024-09-28 08:52:28.802836] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:50.829 [2024-09-28 08:52:28.802847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.829 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.830 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.830 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.830 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.090 08:52:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.090 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.090 "name": "Existed_Raid", 00:14:51.090 "uuid": "1d51a52b-c265-419b-bf5d-e1ba573b935b", 00:14:51.090 "strip_size_kb": 64, 00:14:51.090 "state": "configuring", 00:14:51.090 "raid_level": "raid5f", 00:14:51.090 "superblock": true, 00:14:51.090 "num_base_bdevs": 3, 00:14:51.090 "num_base_bdevs_discovered": 0, 00:14:51.090 "num_base_bdevs_operational": 3, 00:14:51.090 "base_bdevs_list": [ 00:14:51.090 { 00:14:51.090 "name": "BaseBdev1", 00:14:51.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.090 "is_configured": false, 00:14:51.090 "data_offset": 0, 00:14:51.090 "data_size": 0 00:14:51.090 }, 00:14:51.090 { 00:14:51.090 "name": "BaseBdev2", 00:14:51.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.090 "is_configured": false, 00:14:51.090 "data_offset": 0, 00:14:51.090 "data_size": 0 00:14:51.090 }, 00:14:51.090 { 00:14:51.090 "name": "BaseBdev3", 00:14:51.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.090 "is_configured": false, 00:14:51.090 "data_offset": 0, 00:14:51.090 "data_size": 0 00:14:51.090 } 00:14:51.090 ] 00:14:51.090 }' 00:14:51.090 08:52:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.090 08:52:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.349 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:51.349 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.349 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.349 [2024-09-28 08:52:29.253813] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.349 
[2024-09-28 08:52:29.253927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:51.349 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.350 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:51.350 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.350 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.350 [2024-09-28 08:52:29.265822] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:51.350 [2024-09-28 08:52:29.265936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:51.350 [2024-09-28 08:52:29.265970] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.350 [2024-09-28 08:52:29.265998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.350 [2024-09-28 08:52:29.266020] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:51.350 [2024-09-28 08:52:29.266046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:51.350 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.350 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:51.350 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.350 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.610 [2024-09-28 08:52:29.343818] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.610 BaseBdev1 00:14:51.610 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.610 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:51.610 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:51.610 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:51.610 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:51.610 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.611 [ 00:14:51.611 { 00:14:51.611 "name": "BaseBdev1", 00:14:51.611 "aliases": [ 00:14:51.611 "b3ca7868-89db-4008-b5da-a2dc86f49e5a" 00:14:51.611 ], 00:14:51.611 "product_name": "Malloc disk", 00:14:51.611 "block_size": 512, 00:14:51.611 
"num_blocks": 65536, 00:14:51.611 "uuid": "b3ca7868-89db-4008-b5da-a2dc86f49e5a", 00:14:51.611 "assigned_rate_limits": { 00:14:51.611 "rw_ios_per_sec": 0, 00:14:51.611 "rw_mbytes_per_sec": 0, 00:14:51.611 "r_mbytes_per_sec": 0, 00:14:51.611 "w_mbytes_per_sec": 0 00:14:51.611 }, 00:14:51.611 "claimed": true, 00:14:51.611 "claim_type": "exclusive_write", 00:14:51.611 "zoned": false, 00:14:51.611 "supported_io_types": { 00:14:51.611 "read": true, 00:14:51.611 "write": true, 00:14:51.611 "unmap": true, 00:14:51.611 "flush": true, 00:14:51.611 "reset": true, 00:14:51.611 "nvme_admin": false, 00:14:51.611 "nvme_io": false, 00:14:51.611 "nvme_io_md": false, 00:14:51.611 "write_zeroes": true, 00:14:51.611 "zcopy": true, 00:14:51.611 "get_zone_info": false, 00:14:51.611 "zone_management": false, 00:14:51.611 "zone_append": false, 00:14:51.611 "compare": false, 00:14:51.611 "compare_and_write": false, 00:14:51.611 "abort": true, 00:14:51.611 "seek_hole": false, 00:14:51.611 "seek_data": false, 00:14:51.611 "copy": true, 00:14:51.611 "nvme_iov_md": false 00:14:51.611 }, 00:14:51.611 "memory_domains": [ 00:14:51.611 { 00:14:51.611 "dma_device_id": "system", 00:14:51.611 "dma_device_type": 1 00:14:51.611 }, 00:14:51.611 { 00:14:51.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.611 "dma_device_type": 2 00:14:51.611 } 00:14:51.611 ], 00:14:51.611 "driver_specific": {} 00:14:51.611 } 00:14:51.611 ] 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.611 "name": "Existed_Raid", 00:14:51.611 "uuid": "637b28ad-2c5b-424a-900f-ac72a50ca435", 00:14:51.611 "strip_size_kb": 64, 00:14:51.611 "state": "configuring", 00:14:51.611 "raid_level": "raid5f", 00:14:51.611 "superblock": true, 00:14:51.611 "num_base_bdevs": 3, 00:14:51.611 "num_base_bdevs_discovered": 1, 00:14:51.611 "num_base_bdevs_operational": 3, 00:14:51.611 "base_bdevs_list": [ 00:14:51.611 { 00:14:51.611 
"name": "BaseBdev1", 00:14:51.611 "uuid": "b3ca7868-89db-4008-b5da-a2dc86f49e5a", 00:14:51.611 "is_configured": true, 00:14:51.611 "data_offset": 2048, 00:14:51.611 "data_size": 63488 00:14:51.611 }, 00:14:51.611 { 00:14:51.611 "name": "BaseBdev2", 00:14:51.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.611 "is_configured": false, 00:14:51.611 "data_offset": 0, 00:14:51.611 "data_size": 0 00:14:51.611 }, 00:14:51.611 { 00:14:51.611 "name": "BaseBdev3", 00:14:51.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.611 "is_configured": false, 00:14:51.611 "data_offset": 0, 00:14:51.611 "data_size": 0 00:14:51.611 } 00:14:51.611 ] 00:14:51.611 }' 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.611 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.872 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:51.872 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.872 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.872 [2024-09-28 08:52:29.851109] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.872 [2024-09-28 08:52:29.851234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:51.872 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.872 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:51.872 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.872 08:52:29 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:51.872 [2024-09-28 08:52:29.863158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.872 [2024-09-28 08:52:29.865049] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.872 [2024-09-28 08:52:29.865155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.872 [2024-09-28 08:52:29.865192] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:51.872 [2024-09-28 08:52:29.865220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.132 "name": "Existed_Raid", 00:14:52.132 "uuid": "e690184c-3c4d-46b3-9514-6f1a7d6e0b5e", 00:14:52.132 "strip_size_kb": 64, 00:14:52.132 "state": "configuring", 00:14:52.132 "raid_level": "raid5f", 00:14:52.132 "superblock": true, 00:14:52.132 "num_base_bdevs": 3, 00:14:52.132 "num_base_bdevs_discovered": 1, 00:14:52.132 "num_base_bdevs_operational": 3, 00:14:52.132 "base_bdevs_list": [ 00:14:52.132 { 00:14:52.132 "name": "BaseBdev1", 00:14:52.132 "uuid": "b3ca7868-89db-4008-b5da-a2dc86f49e5a", 00:14:52.132 "is_configured": true, 00:14:52.132 "data_offset": 2048, 00:14:52.132 "data_size": 63488 00:14:52.132 }, 00:14:52.132 { 00:14:52.132 "name": "BaseBdev2", 00:14:52.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.132 "is_configured": false, 00:14:52.132 "data_offset": 0, 00:14:52.132 "data_size": 0 00:14:52.132 }, 00:14:52.132 { 00:14:52.132 "name": "BaseBdev3", 00:14:52.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.132 "is_configured": false, 00:14:52.132 "data_offset": 0, 00:14:52.132 "data_size": 
0 00:14:52.132 } 00:14:52.132 ] 00:14:52.132 }' 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.132 08:52:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.393 [2024-09-28 08:52:30.344491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.393 BaseBdev2 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.393 [ 00:14:52.393 { 00:14:52.393 "name": "BaseBdev2", 00:14:52.393 "aliases": [ 00:14:52.393 "1a7a6953-a57c-4239-a65a-8c5a436297a3" 00:14:52.393 ], 00:14:52.393 "product_name": "Malloc disk", 00:14:52.393 "block_size": 512, 00:14:52.393 "num_blocks": 65536, 00:14:52.393 "uuid": "1a7a6953-a57c-4239-a65a-8c5a436297a3", 00:14:52.393 "assigned_rate_limits": { 00:14:52.393 "rw_ios_per_sec": 0, 00:14:52.393 "rw_mbytes_per_sec": 0, 00:14:52.393 "r_mbytes_per_sec": 0, 00:14:52.393 "w_mbytes_per_sec": 0 00:14:52.393 }, 00:14:52.393 "claimed": true, 00:14:52.393 "claim_type": "exclusive_write", 00:14:52.393 "zoned": false, 00:14:52.393 "supported_io_types": { 00:14:52.393 "read": true, 00:14:52.393 "write": true, 00:14:52.393 "unmap": true, 00:14:52.393 "flush": true, 00:14:52.393 "reset": true, 00:14:52.393 "nvme_admin": false, 00:14:52.393 "nvme_io": false, 00:14:52.393 "nvme_io_md": false, 00:14:52.393 "write_zeroes": true, 00:14:52.393 "zcopy": true, 00:14:52.393 "get_zone_info": false, 00:14:52.393 "zone_management": false, 00:14:52.393 "zone_append": false, 00:14:52.393 "compare": false, 00:14:52.393 "compare_and_write": false, 00:14:52.393 "abort": true, 00:14:52.393 "seek_hole": false, 00:14:52.393 "seek_data": false, 00:14:52.393 "copy": true, 00:14:52.393 "nvme_iov_md": false 00:14:52.393 }, 00:14:52.393 "memory_domains": [ 00:14:52.393 { 00:14:52.393 "dma_device_id": "system", 00:14:52.393 "dma_device_type": 1 00:14:52.393 }, 00:14:52.393 { 00:14:52.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.393 "dma_device_type": 2 00:14:52.393 } 
00:14:52.393 ], 00:14:52.393 "driver_specific": {} 00:14:52.393 } 00:14:52.393 ] 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.393 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.654 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.654 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.654 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.654 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.654 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:52.654 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.654 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.654 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.654 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.654 "name": "Existed_Raid", 00:14:52.654 "uuid": "e690184c-3c4d-46b3-9514-6f1a7d6e0b5e", 00:14:52.654 "strip_size_kb": 64, 00:14:52.654 "state": "configuring", 00:14:52.654 "raid_level": "raid5f", 00:14:52.654 "superblock": true, 00:14:52.654 "num_base_bdevs": 3, 00:14:52.654 "num_base_bdevs_discovered": 2, 00:14:52.654 "num_base_bdevs_operational": 3, 00:14:52.654 "base_bdevs_list": [ 00:14:52.654 { 00:14:52.654 "name": "BaseBdev1", 00:14:52.654 "uuid": "b3ca7868-89db-4008-b5da-a2dc86f49e5a", 00:14:52.654 "is_configured": true, 00:14:52.654 "data_offset": 2048, 00:14:52.654 "data_size": 63488 00:14:52.654 }, 00:14:52.654 { 00:14:52.654 "name": "BaseBdev2", 00:14:52.654 "uuid": "1a7a6953-a57c-4239-a65a-8c5a436297a3", 00:14:52.654 "is_configured": true, 00:14:52.654 "data_offset": 2048, 00:14:52.654 "data_size": 63488 00:14:52.654 }, 00:14:52.654 { 00:14:52.654 "name": "BaseBdev3", 00:14:52.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.654 "is_configured": false, 00:14:52.654 "data_offset": 0, 00:14:52.654 "data_size": 0 00:14:52.654 } 00:14:52.654 ] 00:14:52.654 }' 00:14:52.654 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.654 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.913 [2024-09-28 08:52:30.867383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.913 [2024-09-28 08:52:30.867787] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:52.913 [2024-09-28 08:52:30.867861] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:52.913 [2024-09-28 08:52:30.868150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:52.913 BaseBdev3 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.913 [2024-09-28 08:52:30.873935] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:52.913 [2024-09-28 08:52:30.874023] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:52.913 [2024-09-28 08:52:30.874249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.913 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.913 [ 00:14:52.913 { 00:14:52.913 "name": "BaseBdev3", 00:14:52.913 "aliases": [ 00:14:52.913 "c76449ac-78a0-4166-b43f-2905f1f8f68c" 00:14:52.913 ], 00:14:52.913 "product_name": "Malloc disk", 00:14:52.913 "block_size": 512, 00:14:52.913 "num_blocks": 65536, 00:14:52.913 "uuid": "c76449ac-78a0-4166-b43f-2905f1f8f68c", 00:14:52.913 "assigned_rate_limits": { 00:14:52.913 "rw_ios_per_sec": 0, 00:14:52.913 "rw_mbytes_per_sec": 0, 00:14:52.913 "r_mbytes_per_sec": 0, 00:14:52.913 "w_mbytes_per_sec": 0 00:14:52.913 }, 00:14:52.913 "claimed": true, 00:14:52.913 "claim_type": "exclusive_write", 00:14:52.913 "zoned": false, 00:14:52.913 "supported_io_types": { 00:14:52.913 "read": true, 00:14:52.913 "write": true, 00:14:52.913 "unmap": true, 00:14:52.913 "flush": true, 00:14:52.913 "reset": true, 00:14:52.913 "nvme_admin": false, 00:14:52.913 "nvme_io": false, 00:14:52.913 "nvme_io_md": false, 00:14:52.913 "write_zeroes": true, 00:14:52.913 "zcopy": true, 00:14:52.913 "get_zone_info": false, 00:14:52.913 "zone_management": false, 00:14:52.913 "zone_append": false, 00:14:52.913 "compare": false, 00:14:52.913 "compare_and_write": false, 00:14:52.913 "abort": true, 00:14:52.913 "seek_hole": false, 00:14:52.913 "seek_data": false, 00:14:52.913 "copy": true, 00:14:52.913 
"nvme_iov_md": false 00:14:52.913 }, 00:14:52.913 "memory_domains": [ 00:14:52.913 { 00:14:52.913 "dma_device_id": "system", 00:14:52.913 "dma_device_type": 1 00:14:52.913 }, 00:14:53.174 { 00:14:53.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.174 "dma_device_type": 2 00:14:53.174 } 00:14:53.174 ], 00:14:53.174 "driver_specific": {} 00:14:53.174 } 00:14:53.174 ] 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.174 "name": "Existed_Raid", 00:14:53.174 "uuid": "e690184c-3c4d-46b3-9514-6f1a7d6e0b5e", 00:14:53.174 "strip_size_kb": 64, 00:14:53.174 "state": "online", 00:14:53.174 "raid_level": "raid5f", 00:14:53.174 "superblock": true, 00:14:53.174 "num_base_bdevs": 3, 00:14:53.174 "num_base_bdevs_discovered": 3, 00:14:53.174 "num_base_bdevs_operational": 3, 00:14:53.174 "base_bdevs_list": [ 00:14:53.174 { 00:14:53.174 "name": "BaseBdev1", 00:14:53.174 "uuid": "b3ca7868-89db-4008-b5da-a2dc86f49e5a", 00:14:53.174 "is_configured": true, 00:14:53.174 "data_offset": 2048, 00:14:53.174 "data_size": 63488 00:14:53.174 }, 00:14:53.174 { 00:14:53.174 "name": "BaseBdev2", 00:14:53.174 "uuid": "1a7a6953-a57c-4239-a65a-8c5a436297a3", 00:14:53.174 "is_configured": true, 00:14:53.174 "data_offset": 2048, 00:14:53.174 "data_size": 63488 00:14:53.174 }, 00:14:53.174 { 00:14:53.174 "name": "BaseBdev3", 00:14:53.174 "uuid": "c76449ac-78a0-4166-b43f-2905f1f8f68c", 00:14:53.174 "is_configured": true, 00:14:53.174 "data_offset": 2048, 00:14:53.174 "data_size": 63488 00:14:53.174 } 00:14:53.174 ] 00:14:53.174 }' 00:14:53.174 08:52:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.174 08:52:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.434 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:53.434 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:53.434 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:53.434 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.434 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.434 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:53.434 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:53.435 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.435 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.435 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.435 [2024-09-28 08:52:31.339496] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.435 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.435 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.435 "name": "Existed_Raid", 00:14:53.435 "aliases": [ 00:14:53.435 "e690184c-3c4d-46b3-9514-6f1a7d6e0b5e" 00:14:53.435 ], 00:14:53.435 "product_name": "Raid Volume", 00:14:53.435 "block_size": 512, 00:14:53.435 "num_blocks": 126976, 00:14:53.435 "uuid": "e690184c-3c4d-46b3-9514-6f1a7d6e0b5e", 00:14:53.435 "assigned_rate_limits": { 00:14:53.435 "rw_ios_per_sec": 0, 00:14:53.435 
"rw_mbytes_per_sec": 0, 00:14:53.435 "r_mbytes_per_sec": 0, 00:14:53.435 "w_mbytes_per_sec": 0 00:14:53.435 }, 00:14:53.435 "claimed": false, 00:14:53.435 "zoned": false, 00:14:53.435 "supported_io_types": { 00:14:53.435 "read": true, 00:14:53.435 "write": true, 00:14:53.435 "unmap": false, 00:14:53.435 "flush": false, 00:14:53.435 "reset": true, 00:14:53.435 "nvme_admin": false, 00:14:53.435 "nvme_io": false, 00:14:53.435 "nvme_io_md": false, 00:14:53.435 "write_zeroes": true, 00:14:53.435 "zcopy": false, 00:14:53.435 "get_zone_info": false, 00:14:53.435 "zone_management": false, 00:14:53.435 "zone_append": false, 00:14:53.435 "compare": false, 00:14:53.435 "compare_and_write": false, 00:14:53.435 "abort": false, 00:14:53.435 "seek_hole": false, 00:14:53.435 "seek_data": false, 00:14:53.435 "copy": false, 00:14:53.435 "nvme_iov_md": false 00:14:53.435 }, 00:14:53.435 "driver_specific": { 00:14:53.435 "raid": { 00:14:53.435 "uuid": "e690184c-3c4d-46b3-9514-6f1a7d6e0b5e", 00:14:53.435 "strip_size_kb": 64, 00:14:53.435 "state": "online", 00:14:53.435 "raid_level": "raid5f", 00:14:53.435 "superblock": true, 00:14:53.435 "num_base_bdevs": 3, 00:14:53.435 "num_base_bdevs_discovered": 3, 00:14:53.435 "num_base_bdevs_operational": 3, 00:14:53.435 "base_bdevs_list": [ 00:14:53.435 { 00:14:53.435 "name": "BaseBdev1", 00:14:53.435 "uuid": "b3ca7868-89db-4008-b5da-a2dc86f49e5a", 00:14:53.435 "is_configured": true, 00:14:53.435 "data_offset": 2048, 00:14:53.435 "data_size": 63488 00:14:53.435 }, 00:14:53.435 { 00:14:53.435 "name": "BaseBdev2", 00:14:53.435 "uuid": "1a7a6953-a57c-4239-a65a-8c5a436297a3", 00:14:53.435 "is_configured": true, 00:14:53.435 "data_offset": 2048, 00:14:53.435 "data_size": 63488 00:14:53.435 }, 00:14:53.435 { 00:14:53.435 "name": "BaseBdev3", 00:14:53.435 "uuid": "c76449ac-78a0-4166-b43f-2905f1f8f68c", 00:14:53.435 "is_configured": true, 00:14:53.435 "data_offset": 2048, 00:14:53.435 "data_size": 63488 00:14:53.435 } 00:14:53.435 ] 00:14:53.435 } 
00:14:53.435 } 00:14:53.435 }' 00:14:53.435 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.435 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:53.435 BaseBdev2 00:14:53.435 BaseBdev3' 00:14:53.435 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.695 [2024-09-28 08:52:31.578904] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.695 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.955 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.955 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.955 "name": "Existed_Raid", 00:14:53.955 "uuid": "e690184c-3c4d-46b3-9514-6f1a7d6e0b5e", 00:14:53.955 "strip_size_kb": 64, 00:14:53.955 "state": "online", 00:14:53.955 "raid_level": "raid5f", 00:14:53.955 "superblock": true, 00:14:53.955 "num_base_bdevs": 3, 00:14:53.955 "num_base_bdevs_discovered": 2, 00:14:53.955 "num_base_bdevs_operational": 2, 00:14:53.955 "base_bdevs_list": [ 00:14:53.955 { 00:14:53.955 "name": null, 00:14:53.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.955 "is_configured": false, 00:14:53.955 "data_offset": 0, 00:14:53.955 "data_size": 63488 00:14:53.955 }, 00:14:53.955 { 00:14:53.955 "name": "BaseBdev2", 00:14:53.955 "uuid": "1a7a6953-a57c-4239-a65a-8c5a436297a3", 00:14:53.955 "is_configured": true, 00:14:53.955 "data_offset": 2048, 00:14:53.955 "data_size": 63488 00:14:53.955 }, 00:14:53.955 { 00:14:53.955 "name": "BaseBdev3", 00:14:53.955 "uuid": "c76449ac-78a0-4166-b43f-2905f1f8f68c", 00:14:53.955 "is_configured": true, 00:14:53.955 "data_offset": 2048, 00:14:53.955 "data_size": 63488 00:14:53.955 } 00:14:53.955 ] 00:14:53.955 }' 00:14:53.955 08:52:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.955 08:52:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.215 08:52:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:54.215 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.215 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.215 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.215 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.215 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:54.215 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.215 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:54.215 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.215 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:54.215 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.215 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.215 [2024-09-28 08:52:32.154962] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:54.215 [2024-09-28 08:52:32.155132] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.475 [2024-09-28 08:52:32.245294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.475 [2024-09-28 08:52:32.301215] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:54.475 [2024-09-28 08:52:32.301269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.475 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.736 BaseBdev2 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # 
[[ -z '' ]] 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.736 [ 00:14:54.736 { 00:14:54.736 "name": "BaseBdev2", 00:14:54.736 "aliases": [ 00:14:54.736 "e77a0dd2-5e37-4fb7-a25c-0377ab8a0b00" 00:14:54.736 ], 00:14:54.736 "product_name": "Malloc disk", 00:14:54.736 "block_size": 512, 00:14:54.736 "num_blocks": 65536, 00:14:54.736 "uuid": "e77a0dd2-5e37-4fb7-a25c-0377ab8a0b00", 00:14:54.736 "assigned_rate_limits": { 00:14:54.736 "rw_ios_per_sec": 0, 00:14:54.736 "rw_mbytes_per_sec": 0, 00:14:54.736 "r_mbytes_per_sec": 0, 00:14:54.736 "w_mbytes_per_sec": 0 00:14:54.736 }, 00:14:54.736 "claimed": false, 00:14:54.736 "zoned": false, 00:14:54.736 "supported_io_types": { 00:14:54.736 "read": true, 00:14:54.736 "write": true, 00:14:54.736 "unmap": true, 00:14:54.736 "flush": true, 00:14:54.736 "reset": true, 00:14:54.736 "nvme_admin": false, 00:14:54.736 "nvme_io": false, 00:14:54.736 "nvme_io_md": false, 00:14:54.736 "write_zeroes": true, 00:14:54.736 "zcopy": true, 00:14:54.736 "get_zone_info": false, 00:14:54.736 "zone_management": false, 00:14:54.736 "zone_append": false, 
00:14:54.736 "compare": false, 00:14:54.736 "compare_and_write": false, 00:14:54.736 "abort": true, 00:14:54.736 "seek_hole": false, 00:14:54.736 "seek_data": false, 00:14:54.736 "copy": true, 00:14:54.736 "nvme_iov_md": false 00:14:54.736 }, 00:14:54.736 "memory_domains": [ 00:14:54.736 { 00:14:54.736 "dma_device_id": "system", 00:14:54.736 "dma_device_type": 1 00:14:54.736 }, 00:14:54.736 { 00:14:54.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.736 "dma_device_type": 2 00:14:54.736 } 00:14:54.736 ], 00:14:54.736 "driver_specific": {} 00:14:54.736 } 00:14:54.736 ] 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.736 BaseBdev3 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:54.736 
08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.736 [ 00:14:54.736 { 00:14:54.736 "name": "BaseBdev3", 00:14:54.736 "aliases": [ 00:14:54.736 "18f500a8-2d25-4b91-99a8-6c604760e886" 00:14:54.736 ], 00:14:54.736 "product_name": "Malloc disk", 00:14:54.736 "block_size": 512, 00:14:54.736 "num_blocks": 65536, 00:14:54.736 "uuid": "18f500a8-2d25-4b91-99a8-6c604760e886", 00:14:54.736 "assigned_rate_limits": { 00:14:54.736 "rw_ios_per_sec": 0, 00:14:54.736 "rw_mbytes_per_sec": 0, 00:14:54.736 "r_mbytes_per_sec": 0, 00:14:54.736 "w_mbytes_per_sec": 0 00:14:54.736 }, 00:14:54.736 "claimed": false, 00:14:54.736 "zoned": false, 00:14:54.736 "supported_io_types": { 00:14:54.736 "read": true, 00:14:54.736 "write": true, 00:14:54.736 "unmap": true, 00:14:54.736 "flush": true, 00:14:54.736 "reset": true, 00:14:54.736 "nvme_admin": false, 00:14:54.736 "nvme_io": false, 00:14:54.736 "nvme_io_md": false, 00:14:54.736 "write_zeroes": true, 00:14:54.736 "zcopy": true, 00:14:54.736 "get_zone_info": 
false, 00:14:54.736 "zone_management": false, 00:14:54.736 "zone_append": false, 00:14:54.736 "compare": false, 00:14:54.736 "compare_and_write": false, 00:14:54.736 "abort": true, 00:14:54.736 "seek_hole": false, 00:14:54.736 "seek_data": false, 00:14:54.736 "copy": true, 00:14:54.736 "nvme_iov_md": false 00:14:54.736 }, 00:14:54.736 "memory_domains": [ 00:14:54.736 { 00:14:54.736 "dma_device_id": "system", 00:14:54.736 "dma_device_type": 1 00:14:54.736 }, 00:14:54.736 { 00:14:54.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.736 "dma_device_type": 2 00:14:54.736 } 00:14:54.736 ], 00:14:54.736 "driver_specific": {} 00:14:54.736 } 00:14:54.736 ] 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.736 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.736 [2024-09-28 08:52:32.600045] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.736 [2024-09-28 08:52:32.600104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.737 [2024-09-28 08:52:32.600131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:54.737 [2024-09-28 08:52:32.601917] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.737 08:52:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.737 "name": "Existed_Raid", 00:14:54.737 "uuid": "d2d8d202-d9f3-4ae7-9611-8db416c8fb64", 00:14:54.737 "strip_size_kb": 64, 00:14:54.737 "state": "configuring", 00:14:54.737 "raid_level": "raid5f", 00:14:54.737 "superblock": true, 00:14:54.737 "num_base_bdevs": 3, 00:14:54.737 "num_base_bdevs_discovered": 2, 00:14:54.737 "num_base_bdevs_operational": 3, 00:14:54.737 "base_bdevs_list": [ 00:14:54.737 { 00:14:54.737 "name": "BaseBdev1", 00:14:54.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.737 "is_configured": false, 00:14:54.737 "data_offset": 0, 00:14:54.737 "data_size": 0 00:14:54.737 }, 00:14:54.737 { 00:14:54.737 "name": "BaseBdev2", 00:14:54.737 "uuid": "e77a0dd2-5e37-4fb7-a25c-0377ab8a0b00", 00:14:54.737 "is_configured": true, 00:14:54.737 "data_offset": 2048, 00:14:54.737 "data_size": 63488 00:14:54.737 }, 00:14:54.737 { 00:14:54.737 "name": "BaseBdev3", 00:14:54.737 "uuid": "18f500a8-2d25-4b91-99a8-6c604760e886", 00:14:54.737 "is_configured": true, 00:14:54.737 "data_offset": 2048, 00:14:54.737 "data_size": 63488 00:14:54.737 } 00:14:54.737 ] 00:14:54.737 }' 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.737 08:52:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.306 [2024-09-28 08:52:33.043280] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.306 
08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.306 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.306 "name": "Existed_Raid", 00:14:55.306 "uuid": 
"d2d8d202-d9f3-4ae7-9611-8db416c8fb64", 00:14:55.306 "strip_size_kb": 64, 00:14:55.306 "state": "configuring", 00:14:55.306 "raid_level": "raid5f", 00:14:55.306 "superblock": true, 00:14:55.306 "num_base_bdevs": 3, 00:14:55.306 "num_base_bdevs_discovered": 1, 00:14:55.306 "num_base_bdevs_operational": 3, 00:14:55.306 "base_bdevs_list": [ 00:14:55.306 { 00:14:55.306 "name": "BaseBdev1", 00:14:55.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.306 "is_configured": false, 00:14:55.306 "data_offset": 0, 00:14:55.306 "data_size": 0 00:14:55.306 }, 00:14:55.306 { 00:14:55.306 "name": null, 00:14:55.306 "uuid": "e77a0dd2-5e37-4fb7-a25c-0377ab8a0b00", 00:14:55.306 "is_configured": false, 00:14:55.306 "data_offset": 0, 00:14:55.306 "data_size": 63488 00:14:55.306 }, 00:14:55.306 { 00:14:55.306 "name": "BaseBdev3", 00:14:55.307 "uuid": "18f500a8-2d25-4b91-99a8-6c604760e886", 00:14:55.307 "is_configured": true, 00:14:55.307 "data_offset": 2048, 00:14:55.307 "data_size": 63488 00:14:55.307 } 00:14:55.307 ] 00:14:55.307 }' 00:14:55.307 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.307 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.570 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.570 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:55.570 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.570 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.570 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:55.832 08:52:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.832 [2024-09-28 08:52:33.613536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.832 BaseBdev1 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.832 [ 00:14:55.832 { 00:14:55.832 "name": "BaseBdev1", 00:14:55.832 "aliases": [ 00:14:55.832 "c5b51008-e6db-446f-a3b9-5fe84e60e305" 00:14:55.832 ], 00:14:55.832 "product_name": "Malloc disk", 00:14:55.832 "block_size": 512, 00:14:55.832 "num_blocks": 65536, 00:14:55.832 "uuid": "c5b51008-e6db-446f-a3b9-5fe84e60e305", 00:14:55.832 "assigned_rate_limits": { 00:14:55.832 "rw_ios_per_sec": 0, 00:14:55.832 "rw_mbytes_per_sec": 0, 00:14:55.832 "r_mbytes_per_sec": 0, 00:14:55.832 "w_mbytes_per_sec": 0 00:14:55.832 }, 00:14:55.832 "claimed": true, 00:14:55.832 "claim_type": "exclusive_write", 00:14:55.832 "zoned": false, 00:14:55.832 "supported_io_types": { 00:14:55.832 "read": true, 00:14:55.832 "write": true, 00:14:55.832 "unmap": true, 00:14:55.832 "flush": true, 00:14:55.832 "reset": true, 00:14:55.832 "nvme_admin": false, 00:14:55.832 "nvme_io": false, 00:14:55.832 "nvme_io_md": false, 00:14:55.832 "write_zeroes": true, 00:14:55.832 "zcopy": true, 00:14:55.832 "get_zone_info": false, 00:14:55.832 "zone_management": false, 00:14:55.832 "zone_append": false, 00:14:55.832 "compare": false, 00:14:55.832 "compare_and_write": false, 00:14:55.832 "abort": true, 00:14:55.832 "seek_hole": false, 00:14:55.832 "seek_data": false, 00:14:55.832 "copy": true, 00:14:55.832 "nvme_iov_md": false 00:14:55.832 }, 00:14:55.832 "memory_domains": [ 00:14:55.832 { 00:14:55.832 "dma_device_id": "system", 00:14:55.832 "dma_device_type": 1 00:14:55.832 }, 00:14:55.832 { 00:14:55.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.832 "dma_device_type": 2 00:14:55.832 } 00:14:55.832 ], 00:14:55.832 "driver_specific": {} 00:14:55.832 } 00:14:55.832 ] 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # 
return 0 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.832 "name": "Existed_Raid", 00:14:55.832 "uuid": 
"d2d8d202-d9f3-4ae7-9611-8db416c8fb64", 00:14:55.832 "strip_size_kb": 64, 00:14:55.832 "state": "configuring", 00:14:55.832 "raid_level": "raid5f", 00:14:55.832 "superblock": true, 00:14:55.832 "num_base_bdevs": 3, 00:14:55.832 "num_base_bdevs_discovered": 2, 00:14:55.832 "num_base_bdevs_operational": 3, 00:14:55.832 "base_bdevs_list": [ 00:14:55.832 { 00:14:55.832 "name": "BaseBdev1", 00:14:55.832 "uuid": "c5b51008-e6db-446f-a3b9-5fe84e60e305", 00:14:55.832 "is_configured": true, 00:14:55.832 "data_offset": 2048, 00:14:55.832 "data_size": 63488 00:14:55.832 }, 00:14:55.832 { 00:14:55.832 "name": null, 00:14:55.832 "uuid": "e77a0dd2-5e37-4fb7-a25c-0377ab8a0b00", 00:14:55.832 "is_configured": false, 00:14:55.832 "data_offset": 0, 00:14:55.832 "data_size": 63488 00:14:55.832 }, 00:14:55.832 { 00:14:55.832 "name": "BaseBdev3", 00:14:55.832 "uuid": "18f500a8-2d25-4b91-99a8-6c604760e886", 00:14:55.832 "is_configured": true, 00:14:55.832 "data_offset": 2048, 00:14:55.832 "data_size": 63488 00:14:55.832 } 00:14:55.832 ] 00:14:55.832 }' 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.832 08:52:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:56.400 08:52:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.400 [2024-09-28 08:52:34.144692] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.400 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:56.401 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.401 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.401 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.401 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.401 "name": "Existed_Raid", 00:14:56.401 "uuid": "d2d8d202-d9f3-4ae7-9611-8db416c8fb64", 00:14:56.401 "strip_size_kb": 64, 00:14:56.401 "state": "configuring", 00:14:56.401 "raid_level": "raid5f", 00:14:56.401 "superblock": true, 00:14:56.401 "num_base_bdevs": 3, 00:14:56.401 "num_base_bdevs_discovered": 1, 00:14:56.401 "num_base_bdevs_operational": 3, 00:14:56.401 "base_bdevs_list": [ 00:14:56.401 { 00:14:56.401 "name": "BaseBdev1", 00:14:56.401 "uuid": "c5b51008-e6db-446f-a3b9-5fe84e60e305", 00:14:56.401 "is_configured": true, 00:14:56.401 "data_offset": 2048, 00:14:56.401 "data_size": 63488 00:14:56.401 }, 00:14:56.401 { 00:14:56.401 "name": null, 00:14:56.401 "uuid": "e77a0dd2-5e37-4fb7-a25c-0377ab8a0b00", 00:14:56.401 "is_configured": false, 00:14:56.401 "data_offset": 0, 00:14:56.401 "data_size": 63488 00:14:56.401 }, 00:14:56.401 { 00:14:56.401 "name": null, 00:14:56.401 "uuid": "18f500a8-2d25-4b91-99a8-6c604760e886", 00:14:56.401 "is_configured": false, 00:14:56.401 "data_offset": 0, 00:14:56.401 "data_size": 63488 00:14:56.401 } 00:14:56.401 ] 00:14:56.401 }' 00:14:56.401 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.401 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.660 [2024-09-28 08:52:34.603861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.660 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.919 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.919 "name": "Existed_Raid", 00:14:56.919 "uuid": "d2d8d202-d9f3-4ae7-9611-8db416c8fb64", 00:14:56.919 "strip_size_kb": 64, 00:14:56.919 "state": "configuring", 00:14:56.919 "raid_level": "raid5f", 00:14:56.919 "superblock": true, 00:14:56.919 "num_base_bdevs": 3, 00:14:56.919 "num_base_bdevs_discovered": 2, 00:14:56.919 "num_base_bdevs_operational": 3, 00:14:56.919 "base_bdevs_list": [ 00:14:56.919 { 00:14:56.919 "name": "BaseBdev1", 00:14:56.919 "uuid": "c5b51008-e6db-446f-a3b9-5fe84e60e305", 00:14:56.919 "is_configured": true, 00:14:56.919 "data_offset": 2048, 00:14:56.919 "data_size": 63488 00:14:56.919 }, 00:14:56.919 { 00:14:56.919 "name": null, 00:14:56.919 "uuid": "e77a0dd2-5e37-4fb7-a25c-0377ab8a0b00", 00:14:56.919 "is_configured": false, 00:14:56.919 "data_offset": 0, 00:14:56.919 "data_size": 63488 00:14:56.919 }, 00:14:56.919 { 00:14:56.919 "name": "BaseBdev3", 00:14:56.919 "uuid": "18f500a8-2d25-4b91-99a8-6c604760e886", 
00:14:56.919 "is_configured": true, 00:14:56.919 "data_offset": 2048, 00:14:56.919 "data_size": 63488 00:14:56.919 } 00:14:56.919 ] 00:14:56.919 }' 00:14:56.919 08:52:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.919 08:52:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.178 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.178 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:57.178 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.178 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.178 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.178 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:57.178 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:57.178 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.178 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.178 [2024-09-28 08:52:35.103239] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.437 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.437 "name": "Existed_Raid", 00:14:57.437 "uuid": "d2d8d202-d9f3-4ae7-9611-8db416c8fb64", 00:14:57.437 "strip_size_kb": 64, 00:14:57.437 "state": "configuring", 00:14:57.437 "raid_level": "raid5f", 00:14:57.437 "superblock": true, 00:14:57.437 "num_base_bdevs": 3, 00:14:57.437 "num_base_bdevs_discovered": 1, 00:14:57.437 "num_base_bdevs_operational": 3, 00:14:57.437 "base_bdevs_list": [ 00:14:57.437 { 00:14:57.437 
"name": null, 00:14:57.437 "uuid": "c5b51008-e6db-446f-a3b9-5fe84e60e305", 00:14:57.437 "is_configured": false, 00:14:57.437 "data_offset": 0, 00:14:57.437 "data_size": 63488 00:14:57.437 }, 00:14:57.437 { 00:14:57.437 "name": null, 00:14:57.437 "uuid": "e77a0dd2-5e37-4fb7-a25c-0377ab8a0b00", 00:14:57.437 "is_configured": false, 00:14:57.437 "data_offset": 0, 00:14:57.437 "data_size": 63488 00:14:57.437 }, 00:14:57.437 { 00:14:57.437 "name": "BaseBdev3", 00:14:57.438 "uuid": "18f500a8-2d25-4b91-99a8-6c604760e886", 00:14:57.438 "is_configured": true, 00:14:57.438 "data_offset": 2048, 00:14:57.438 "data_size": 63488 00:14:57.438 } 00:14:57.438 ] 00:14:57.438 }' 00:14:57.438 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.438 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.698 [2024-09-28 
08:52:35.674132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.698 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.957 08:52:35 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.957 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.957 "name": "Existed_Raid", 00:14:57.957 "uuid": "d2d8d202-d9f3-4ae7-9611-8db416c8fb64", 00:14:57.957 "strip_size_kb": 64, 00:14:57.957 "state": "configuring", 00:14:57.957 "raid_level": "raid5f", 00:14:57.957 "superblock": true, 00:14:57.957 "num_base_bdevs": 3, 00:14:57.957 "num_base_bdevs_discovered": 2, 00:14:57.957 "num_base_bdevs_operational": 3, 00:14:57.957 "base_bdevs_list": [ 00:14:57.957 { 00:14:57.957 "name": null, 00:14:57.957 "uuid": "c5b51008-e6db-446f-a3b9-5fe84e60e305", 00:14:57.957 "is_configured": false, 00:14:57.957 "data_offset": 0, 00:14:57.957 "data_size": 63488 00:14:57.957 }, 00:14:57.957 { 00:14:57.957 "name": "BaseBdev2", 00:14:57.957 "uuid": "e77a0dd2-5e37-4fb7-a25c-0377ab8a0b00", 00:14:57.957 "is_configured": true, 00:14:57.957 "data_offset": 2048, 00:14:57.957 "data_size": 63488 00:14:57.957 }, 00:14:57.957 { 00:14:57.957 "name": "BaseBdev3", 00:14:57.957 "uuid": "18f500a8-2d25-4b91-99a8-6c604760e886", 00:14:57.957 "is_configured": true, 00:14:57.957 "data_offset": 2048, 00:14:57.957 "data_size": 63488 00:14:57.957 } 00:14:57.957 ] 00:14:57.957 }' 00:14:57.957 08:52:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.957 08:52:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.216 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.216 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.216 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.216 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:58.216 08:52:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.216 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:58.216 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.216 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.216 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.216 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:58.216 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c5b51008-e6db-446f-a3b9-5fe84e60e305 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.476 [2024-09-28 08:52:36.280833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:58.476 [2024-09-28 08:52:36.281165] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:58.476 [2024-09-28 08:52:36.281231] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:58.476 [2024-09-28 08:52:36.281520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:58.476 NewBaseBdev 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:58.476 08:52:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.476 [2024-09-28 08:52:36.286854] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:58.476 [2024-09-28 08:52:36.286927] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:58.476 [2024-09-28 08:52:36.287160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.476 [ 00:14:58.476 { 00:14:58.476 "name": "NewBaseBdev", 00:14:58.476 "aliases": [ 00:14:58.476 "c5b51008-e6db-446f-a3b9-5fe84e60e305" 00:14:58.476 ], 00:14:58.476 "product_name": "Malloc 
disk", 00:14:58.476 "block_size": 512, 00:14:58.476 "num_blocks": 65536, 00:14:58.476 "uuid": "c5b51008-e6db-446f-a3b9-5fe84e60e305", 00:14:58.476 "assigned_rate_limits": { 00:14:58.476 "rw_ios_per_sec": 0, 00:14:58.476 "rw_mbytes_per_sec": 0, 00:14:58.476 "r_mbytes_per_sec": 0, 00:14:58.476 "w_mbytes_per_sec": 0 00:14:58.476 }, 00:14:58.476 "claimed": true, 00:14:58.476 "claim_type": "exclusive_write", 00:14:58.476 "zoned": false, 00:14:58.476 "supported_io_types": { 00:14:58.476 "read": true, 00:14:58.476 "write": true, 00:14:58.476 "unmap": true, 00:14:58.476 "flush": true, 00:14:58.476 "reset": true, 00:14:58.476 "nvme_admin": false, 00:14:58.476 "nvme_io": false, 00:14:58.476 "nvme_io_md": false, 00:14:58.476 "write_zeroes": true, 00:14:58.476 "zcopy": true, 00:14:58.476 "get_zone_info": false, 00:14:58.476 "zone_management": false, 00:14:58.476 "zone_append": false, 00:14:58.476 "compare": false, 00:14:58.476 "compare_and_write": false, 00:14:58.476 "abort": true, 00:14:58.476 "seek_hole": false, 00:14:58.476 "seek_data": false, 00:14:58.476 "copy": true, 00:14:58.476 "nvme_iov_md": false 00:14:58.476 }, 00:14:58.476 "memory_domains": [ 00:14:58.476 { 00:14:58.476 "dma_device_id": "system", 00:14:58.476 "dma_device_type": 1 00:14:58.476 }, 00:14:58.476 { 00:14:58.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.476 "dma_device_type": 2 00:14:58.476 } 00:14:58.476 ], 00:14:58.476 "driver_specific": {} 00:14:58.476 } 00:14:58.476 ] 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.476 08:52:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.476 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.477 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.477 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.477 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.477 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.477 "name": "Existed_Raid", 00:14:58.477 "uuid": "d2d8d202-d9f3-4ae7-9611-8db416c8fb64", 00:14:58.477 "strip_size_kb": 64, 00:14:58.477 "state": "online", 00:14:58.477 "raid_level": "raid5f", 00:14:58.477 "superblock": true, 00:14:58.477 "num_base_bdevs": 3, 00:14:58.477 "num_base_bdevs_discovered": 3, 00:14:58.477 "num_base_bdevs_operational": 3, 00:14:58.477 
"base_bdevs_list": [ 00:14:58.477 { 00:14:58.477 "name": "NewBaseBdev", 00:14:58.477 "uuid": "c5b51008-e6db-446f-a3b9-5fe84e60e305", 00:14:58.477 "is_configured": true, 00:14:58.477 "data_offset": 2048, 00:14:58.477 "data_size": 63488 00:14:58.477 }, 00:14:58.477 { 00:14:58.477 "name": "BaseBdev2", 00:14:58.477 "uuid": "e77a0dd2-5e37-4fb7-a25c-0377ab8a0b00", 00:14:58.477 "is_configured": true, 00:14:58.477 "data_offset": 2048, 00:14:58.477 "data_size": 63488 00:14:58.477 }, 00:14:58.477 { 00:14:58.477 "name": "BaseBdev3", 00:14:58.477 "uuid": "18f500a8-2d25-4b91-99a8-6c604760e886", 00:14:58.477 "is_configured": true, 00:14:58.477 "data_offset": 2048, 00:14:58.477 "data_size": 63488 00:14:58.477 } 00:14:58.477 ] 00:14:58.477 }' 00:14:58.477 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.477 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.048 [2024-09-28 08:52:36.808480] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:59.048 "name": "Existed_Raid", 00:14:59.048 "aliases": [ 00:14:59.048 "d2d8d202-d9f3-4ae7-9611-8db416c8fb64" 00:14:59.048 ], 00:14:59.048 "product_name": "Raid Volume", 00:14:59.048 "block_size": 512, 00:14:59.048 "num_blocks": 126976, 00:14:59.048 "uuid": "d2d8d202-d9f3-4ae7-9611-8db416c8fb64", 00:14:59.048 "assigned_rate_limits": { 00:14:59.048 "rw_ios_per_sec": 0, 00:14:59.048 "rw_mbytes_per_sec": 0, 00:14:59.048 "r_mbytes_per_sec": 0, 00:14:59.048 "w_mbytes_per_sec": 0 00:14:59.048 }, 00:14:59.048 "claimed": false, 00:14:59.048 "zoned": false, 00:14:59.048 "supported_io_types": { 00:14:59.048 "read": true, 00:14:59.048 "write": true, 00:14:59.048 "unmap": false, 00:14:59.048 "flush": false, 00:14:59.048 "reset": true, 00:14:59.048 "nvme_admin": false, 00:14:59.048 "nvme_io": false, 00:14:59.048 "nvme_io_md": false, 00:14:59.048 "write_zeroes": true, 00:14:59.048 "zcopy": false, 00:14:59.048 "get_zone_info": false, 00:14:59.048 "zone_management": false, 00:14:59.048 "zone_append": false, 00:14:59.048 "compare": false, 00:14:59.048 "compare_and_write": false, 00:14:59.048 "abort": false, 00:14:59.048 "seek_hole": false, 00:14:59.048 "seek_data": false, 00:14:59.048 "copy": false, 00:14:59.048 "nvme_iov_md": false 00:14:59.048 }, 00:14:59.048 "driver_specific": { 00:14:59.048 "raid": { 00:14:59.048 "uuid": "d2d8d202-d9f3-4ae7-9611-8db416c8fb64", 00:14:59.048 "strip_size_kb": 64, 00:14:59.048 "state": "online", 00:14:59.048 "raid_level": "raid5f", 00:14:59.048 "superblock": true, 00:14:59.048 
"num_base_bdevs": 3, 00:14:59.048 "num_base_bdevs_discovered": 3, 00:14:59.048 "num_base_bdevs_operational": 3, 00:14:59.048 "base_bdevs_list": [ 00:14:59.048 { 00:14:59.048 "name": "NewBaseBdev", 00:14:59.048 "uuid": "c5b51008-e6db-446f-a3b9-5fe84e60e305", 00:14:59.048 "is_configured": true, 00:14:59.048 "data_offset": 2048, 00:14:59.048 "data_size": 63488 00:14:59.048 }, 00:14:59.048 { 00:14:59.048 "name": "BaseBdev2", 00:14:59.048 "uuid": "e77a0dd2-5e37-4fb7-a25c-0377ab8a0b00", 00:14:59.048 "is_configured": true, 00:14:59.048 "data_offset": 2048, 00:14:59.048 "data_size": 63488 00:14:59.048 }, 00:14:59.048 { 00:14:59.048 "name": "BaseBdev3", 00:14:59.048 "uuid": "18f500a8-2d25-4b91-99a8-6c604760e886", 00:14:59.048 "is_configured": true, 00:14:59.048 "data_offset": 2048, 00:14:59.048 "data_size": 63488 00:14:59.048 } 00:14:59.048 ] 00:14:59.048 } 00:14:59.048 } 00:14:59.048 }' 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:59.048 BaseBdev2 00:14:59.048 BaseBdev3' 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.048 
08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.048 08:52:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.048 08:52:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:59.048 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.048 08:52:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.048 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.048 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.308 08:52:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.308 08:52:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.308 08:52:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.308 08:52:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:59.308 08:52:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.308 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:59.308 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.309 [2024-09-28 08:52:37.107760] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:59.309 [2024-09-28 08:52:37.107788] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.309 [2024-09-28 08:52:37.107855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.309 [2024-09-28 08:52:37.108111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.309 [2024-09-28 08:52:37.108125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80504 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80504 ']' 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80504 00:14:59.309 08:52:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80504 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80504' 00:14:59.309 killing process with pid 80504 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80504 00:14:59.309 [2024-09-28 08:52:37.156418] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.309 08:52:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80504 00:14:59.569 [2024-09-28 08:52:37.434888] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.950 08:52:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:00.950 ************************************ 00:15:00.950 END TEST raid5f_state_function_test_sb 00:15:00.950 ************************************ 00:15:00.950 00:15:00.950 real 0m10.754s 00:15:00.950 user 0m16.943s 00:15:00.950 sys 0m2.099s 00:15:00.950 08:52:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:00.950 08:52:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.950 08:52:38 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:00.950 08:52:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:00.950 
08:52:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:00.950 08:52:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:00.950 ************************************ 00:15:00.950 START TEST raid5f_superblock_test 00:15:00.950 ************************************ 00:15:00.950 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:15:00.950 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:00.950 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:00.950 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:00.950 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:00.950 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:00.950 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:00.950 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:00.950 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:00.950 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:00.950 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:00.950 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:00.950 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:00.950 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:00.950 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:00.950 08:52:38 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:00.951 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:00.951 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81119 00:15:00.951 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:00.951 08:52:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81119 00:15:00.951 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81119 ']' 00:15:00.951 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.951 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.951 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.951 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.951 08:52:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.951 [2024-09-28 08:52:38.811041] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:15:00.951 [2024-09-28 08:52:38.811159] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81119 ] 00:15:01.211 [2024-09-28 08:52:38.978730] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.211 [2024-09-28 08:52:39.176893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.472 [2024-09-28 08:52:39.367027] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.472 [2024-09-28 08:52:39.367084] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.732 malloc1 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.732 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.732 [2024-09-28 08:52:39.656548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:01.732 [2024-09-28 08:52:39.656737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.733 [2024-09-28 08:52:39.656789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:01.733 [2024-09-28 08:52:39.656828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.733 [2024-09-28 08:52:39.658890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.733 [2024-09-28 08:52:39.658981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:01.733 pt1 00:15:01.733 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.733 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:01.733 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:01.733 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:01.733 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:01.733 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:01.733 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.733 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.733 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.733 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:01.733 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.733 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.993 malloc2 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.993 [2024-09-28 08:52:39.746672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:01.993 [2024-09-28 08:52:39.746801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.993 [2024-09-28 08:52:39.746846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:01.993 [2024-09-28 08:52:39.746900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.993 [2024-09-28 08:52:39.748972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.993 [2024-09-28 08:52:39.749067] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:01.993 pt2 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.993 malloc3 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.993 [2024-09-28 08:52:39.799062] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:01.993 [2024-09-28 08:52:39.799192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.993 [2024-09-28 08:52:39.799243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:01.993 [2024-09-28 08:52:39.799280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.993 [2024-09-28 08:52:39.801343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.993 [2024-09-28 08:52:39.801424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:01.993 pt3 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.993 [2024-09-28 08:52:39.811121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:01.993 [2024-09-28 08:52:39.812968] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:01.993 [2024-09-28 08:52:39.813081] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:01.993 [2024-09-28 08:52:39.813295] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:01.993 [2024-09-28 08:52:39.813348] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:01.993 [2024-09-28 08:52:39.813600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:01.993 [2024-09-28 08:52:39.818914] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:01.993 [2024-09-28 08:52:39.818976] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:01.993 [2024-09-28 08:52:39.819228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.993 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.994 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.994 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:01.994 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.994 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.994 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.994 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.994 "name": "raid_bdev1", 00:15:01.994 "uuid": "f867fa88-43b6-49f0-b63b-9bfcba68b083", 00:15:01.994 "strip_size_kb": 64, 00:15:01.994 "state": "online", 00:15:01.994 "raid_level": "raid5f", 00:15:01.994 "superblock": true, 00:15:01.994 "num_base_bdevs": 3, 00:15:01.994 "num_base_bdevs_discovered": 3, 00:15:01.994 "num_base_bdevs_operational": 3, 00:15:01.994 "base_bdevs_list": [ 00:15:01.994 { 00:15:01.994 "name": "pt1", 00:15:01.994 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.994 "is_configured": true, 00:15:01.994 "data_offset": 2048, 00:15:01.994 "data_size": 63488 00:15:01.994 }, 00:15:01.994 { 00:15:01.994 "name": "pt2", 00:15:01.994 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.994 "is_configured": true, 00:15:01.994 "data_offset": 2048, 00:15:01.994 "data_size": 63488 00:15:01.994 }, 00:15:01.994 { 00:15:01.994 "name": "pt3", 00:15:01.994 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.994 "is_configured": true, 00:15:01.994 "data_offset": 2048, 00:15:01.994 "data_size": 63488 00:15:01.994 } 00:15:01.994 ] 00:15:01.994 }' 00:15:01.994 08:52:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.994 08:52:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.253 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:02.253 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:02.253 08:52:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:02.253 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:02.253 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:02.253 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:02.253 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.253 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.253 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:02.254 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.254 [2024-09-28 08:52:40.228958] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.514 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.514 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:02.514 "name": "raid_bdev1", 00:15:02.514 "aliases": [ 00:15:02.514 "f867fa88-43b6-49f0-b63b-9bfcba68b083" 00:15:02.514 ], 00:15:02.514 "product_name": "Raid Volume", 00:15:02.514 "block_size": 512, 00:15:02.514 "num_blocks": 126976, 00:15:02.514 "uuid": "f867fa88-43b6-49f0-b63b-9bfcba68b083", 00:15:02.514 "assigned_rate_limits": { 00:15:02.514 "rw_ios_per_sec": 0, 00:15:02.514 "rw_mbytes_per_sec": 0, 00:15:02.514 "r_mbytes_per_sec": 0, 00:15:02.514 "w_mbytes_per_sec": 0 00:15:02.514 }, 00:15:02.514 "claimed": false, 00:15:02.514 "zoned": false, 00:15:02.514 "supported_io_types": { 00:15:02.514 "read": true, 00:15:02.514 "write": true, 00:15:02.514 "unmap": false, 00:15:02.514 "flush": false, 00:15:02.514 "reset": true, 00:15:02.514 "nvme_admin": false, 00:15:02.514 "nvme_io": false, 00:15:02.514 "nvme_io_md": false, 
00:15:02.514 "write_zeroes": true, 00:15:02.514 "zcopy": false, 00:15:02.514 "get_zone_info": false, 00:15:02.514 "zone_management": false, 00:15:02.514 "zone_append": false, 00:15:02.514 "compare": false, 00:15:02.514 "compare_and_write": false, 00:15:02.514 "abort": false, 00:15:02.514 "seek_hole": false, 00:15:02.514 "seek_data": false, 00:15:02.514 "copy": false, 00:15:02.514 "nvme_iov_md": false 00:15:02.514 }, 00:15:02.514 "driver_specific": { 00:15:02.514 "raid": { 00:15:02.514 "uuid": "f867fa88-43b6-49f0-b63b-9bfcba68b083", 00:15:02.514 "strip_size_kb": 64, 00:15:02.514 "state": "online", 00:15:02.514 "raid_level": "raid5f", 00:15:02.514 "superblock": true, 00:15:02.514 "num_base_bdevs": 3, 00:15:02.514 "num_base_bdevs_discovered": 3, 00:15:02.514 "num_base_bdevs_operational": 3, 00:15:02.514 "base_bdevs_list": [ 00:15:02.514 { 00:15:02.514 "name": "pt1", 00:15:02.514 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.514 "is_configured": true, 00:15:02.514 "data_offset": 2048, 00:15:02.514 "data_size": 63488 00:15:02.514 }, 00:15:02.514 { 00:15:02.514 "name": "pt2", 00:15:02.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.514 "is_configured": true, 00:15:02.514 "data_offset": 2048, 00:15:02.514 "data_size": 63488 00:15:02.514 }, 00:15:02.514 { 00:15:02.514 "name": "pt3", 00:15:02.514 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.514 "is_configured": true, 00:15:02.514 "data_offset": 2048, 00:15:02.514 "data_size": 63488 00:15:02.514 } 00:15:02.514 ] 00:15:02.514 } 00:15:02.514 } 00:15:02.514 }' 00:15:02.514 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:02.515 pt2 00:15:02.515 pt3' 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.515 
08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:02.515 [2024-09-28 08:52:40.484503] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.515 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f867fa88-43b6-49f0-b63b-9bfcba68b083 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f867fa88-43b6-49f0-b63b-9bfcba68b083 ']' 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:02.776 08:52:40 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.776 [2024-09-28 08:52:40.532256] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.776 [2024-09-28 08:52:40.532286] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.776 [2024-09-28 08:52:40.532349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.776 [2024-09-28 08:52:40.532410] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.776 [2024-09-28 08:52:40.532420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.776 [2024-09-28 08:52:40.688021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:02.776 [2024-09-28 08:52:40.689844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:02.776 [2024-09-28 08:52:40.689898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:02.776 [2024-09-28 08:52:40.689945] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:02.776 [2024-09-28 08:52:40.689991] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:02.776 [2024-09-28 08:52:40.690012] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:02.776 [2024-09-28 08:52:40.690031] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.776 [2024-09-28 08:52:40.690042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:02.776 request: 00:15:02.776 { 00:15:02.776 "name": "raid_bdev1", 00:15:02.776 "raid_level": "raid5f", 00:15:02.776 "base_bdevs": [ 00:15:02.776 "malloc1", 00:15:02.776 "malloc2", 00:15:02.776 "malloc3" 00:15:02.776 ], 00:15:02.776 "strip_size_kb": 64, 00:15:02.776 "superblock": false, 00:15:02.776 "method": "bdev_raid_create", 00:15:02.776 "req_id": 1 00:15:02.776 } 00:15:02.776 Got JSON-RPC error response 00:15:02.776 response: 00:15:02.776 { 00:15:02.776 "code": -17, 00:15:02.776 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:02.776 } 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.776 
08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.776 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.776 [2024-09-28 08:52:40.755870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:02.777 [2024-09-28 08:52:40.755972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.777 [2024-09-28 08:52:40.756027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:02.777 [2024-09-28 08:52:40.756059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.777 [2024-09-28 08:52:40.758098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.777 [2024-09-28 08:52:40.758175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:02.777 [2024-09-28 08:52:40.758269] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:02.777 [2024-09-28 08:52:40.758351] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:02.777 pt1 00:15:02.777 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.777 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:15:02.777 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.777 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.777 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.777 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.777 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.777 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.777 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.777 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.777 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.777 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.777 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.777 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.036 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.036 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.036 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.036 "name": "raid_bdev1", 00:15:03.036 "uuid": "f867fa88-43b6-49f0-b63b-9bfcba68b083", 00:15:03.036 "strip_size_kb": 64, 00:15:03.036 "state": "configuring", 00:15:03.036 "raid_level": "raid5f", 00:15:03.036 "superblock": true, 00:15:03.036 "num_base_bdevs": 3, 00:15:03.036 "num_base_bdevs_discovered": 1, 00:15:03.036 
"num_base_bdevs_operational": 3, 00:15:03.036 "base_bdevs_list": [ 00:15:03.037 { 00:15:03.037 "name": "pt1", 00:15:03.037 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.037 "is_configured": true, 00:15:03.037 "data_offset": 2048, 00:15:03.037 "data_size": 63488 00:15:03.037 }, 00:15:03.037 { 00:15:03.037 "name": null, 00:15:03.037 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.037 "is_configured": false, 00:15:03.037 "data_offset": 2048, 00:15:03.037 "data_size": 63488 00:15:03.037 }, 00:15:03.037 { 00:15:03.037 "name": null, 00:15:03.037 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.037 "is_configured": false, 00:15:03.037 "data_offset": 2048, 00:15:03.037 "data_size": 63488 00:15:03.037 } 00:15:03.037 ] 00:15:03.037 }' 00:15:03.037 08:52:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.037 08:52:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.296 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:03.296 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:03.296 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.296 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.296 [2024-09-28 08:52:41.227182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:03.296 [2024-09-28 08:52:41.227258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.296 [2024-09-28 08:52:41.227283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:03.296 [2024-09-28 08:52:41.227294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.297 [2024-09-28 08:52:41.227705] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.297 [2024-09-28 08:52:41.227733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:03.297 [2024-09-28 08:52:41.227803] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:03.297 [2024-09-28 08:52:41.227824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:03.297 pt2 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.297 [2024-09-28 08:52:41.239174] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.297 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.557 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.557 "name": "raid_bdev1", 00:15:03.557 "uuid": "f867fa88-43b6-49f0-b63b-9bfcba68b083", 00:15:03.557 "strip_size_kb": 64, 00:15:03.557 "state": "configuring", 00:15:03.557 "raid_level": "raid5f", 00:15:03.557 "superblock": true, 00:15:03.557 "num_base_bdevs": 3, 00:15:03.557 "num_base_bdevs_discovered": 1, 00:15:03.557 "num_base_bdevs_operational": 3, 00:15:03.557 "base_bdevs_list": [ 00:15:03.557 { 00:15:03.557 "name": "pt1", 00:15:03.557 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.557 "is_configured": true, 00:15:03.557 "data_offset": 2048, 00:15:03.557 "data_size": 63488 00:15:03.557 }, 00:15:03.557 { 00:15:03.557 "name": null, 00:15:03.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.557 "is_configured": false, 00:15:03.557 "data_offset": 0, 00:15:03.557 "data_size": 63488 00:15:03.557 }, 00:15:03.557 { 00:15:03.557 "name": null, 00:15:03.557 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.557 "is_configured": false, 00:15:03.557 "data_offset": 2048, 00:15:03.557 "data_size": 63488 00:15:03.557 } 00:15:03.557 ] 00:15:03.557 }' 00:15:03.557 08:52:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.557 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.817 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:03.817 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:03.817 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:03.817 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.817 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.817 [2024-09-28 08:52:41.734292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:03.817 [2024-09-28 08:52:41.734348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.817 [2024-09-28 08:52:41.734363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:03.817 [2024-09-28 08:52:41.734375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.817 [2024-09-28 08:52:41.734741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.817 [2024-09-28 08:52:41.734763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:03.817 [2024-09-28 08:52:41.734821] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:03.817 [2024-09-28 08:52:41.734844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:03.817 pt2 00:15:03.817 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.817 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:03.817 08:52:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:03.817 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:03.817 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.817 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.817 [2024-09-28 08:52:41.746305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:03.817 [2024-09-28 08:52:41.746400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.817 [2024-09-28 08:52:41.746434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:03.817 [2024-09-28 08:52:41.746468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.817 [2024-09-28 08:52:41.746855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.817 [2024-09-28 08:52:41.746925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:03.817 [2024-09-28 08:52:41.747017] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:03.817 [2024-09-28 08:52:41.747070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:03.817 [2024-09-28 08:52:41.747232] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:03.817 [2024-09-28 08:52:41.747291] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:03.817 [2024-09-28 08:52:41.747546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:03.817 [2024-09-28 08:52:41.752870] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:03.817 [2024-09-28 08:52:41.752930] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:03.817 [2024-09-28 08:52:41.753148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.817 pt3 00:15:03.817 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.817 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:03.817 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:03.817 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:03.817 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.817 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.818 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.818 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.818 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.818 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.818 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.818 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.818 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.818 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.818 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.818 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:15:03.818 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.818 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.818 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.818 "name": "raid_bdev1", 00:15:03.818 "uuid": "f867fa88-43b6-49f0-b63b-9bfcba68b083", 00:15:03.818 "strip_size_kb": 64, 00:15:03.818 "state": "online", 00:15:03.818 "raid_level": "raid5f", 00:15:03.818 "superblock": true, 00:15:03.818 "num_base_bdevs": 3, 00:15:03.818 "num_base_bdevs_discovered": 3, 00:15:03.818 "num_base_bdevs_operational": 3, 00:15:03.818 "base_bdevs_list": [ 00:15:03.818 { 00:15:03.818 "name": "pt1", 00:15:03.818 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.818 "is_configured": true, 00:15:03.818 "data_offset": 2048, 00:15:03.818 "data_size": 63488 00:15:03.818 }, 00:15:03.818 { 00:15:03.818 "name": "pt2", 00:15:03.818 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.818 "is_configured": true, 00:15:03.818 "data_offset": 2048, 00:15:03.818 "data_size": 63488 00:15:03.818 }, 00:15:03.818 { 00:15:03.818 "name": "pt3", 00:15:03.818 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:03.818 "is_configured": true, 00:15:03.818 "data_offset": 2048, 00:15:03.818 "data_size": 63488 00:15:03.818 } 00:15:03.818 ] 00:15:03.818 }' 00:15:03.818 08:52:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.818 08:52:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.387 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:04.387 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:04.387 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:04.387 
08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:04.387 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:04.387 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:04.387 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.387 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:04.387 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.387 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.387 [2024-09-28 08:52:42.227065] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.387 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.387 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:04.387 "name": "raid_bdev1", 00:15:04.387 "aliases": [ 00:15:04.387 "f867fa88-43b6-49f0-b63b-9bfcba68b083" 00:15:04.387 ], 00:15:04.387 "product_name": "Raid Volume", 00:15:04.387 "block_size": 512, 00:15:04.387 "num_blocks": 126976, 00:15:04.387 "uuid": "f867fa88-43b6-49f0-b63b-9bfcba68b083", 00:15:04.387 "assigned_rate_limits": { 00:15:04.387 "rw_ios_per_sec": 0, 00:15:04.387 "rw_mbytes_per_sec": 0, 00:15:04.387 "r_mbytes_per_sec": 0, 00:15:04.387 "w_mbytes_per_sec": 0 00:15:04.387 }, 00:15:04.387 "claimed": false, 00:15:04.387 "zoned": false, 00:15:04.387 "supported_io_types": { 00:15:04.387 "read": true, 00:15:04.387 "write": true, 00:15:04.387 "unmap": false, 00:15:04.387 "flush": false, 00:15:04.388 "reset": true, 00:15:04.388 "nvme_admin": false, 00:15:04.388 "nvme_io": false, 00:15:04.388 "nvme_io_md": false, 00:15:04.388 "write_zeroes": true, 00:15:04.388 "zcopy": false, 00:15:04.388 "get_zone_info": false, 
00:15:04.388 "zone_management": false, 00:15:04.388 "zone_append": false, 00:15:04.388 "compare": false, 00:15:04.388 "compare_and_write": false, 00:15:04.388 "abort": false, 00:15:04.388 "seek_hole": false, 00:15:04.388 "seek_data": false, 00:15:04.388 "copy": false, 00:15:04.388 "nvme_iov_md": false 00:15:04.388 }, 00:15:04.388 "driver_specific": { 00:15:04.388 "raid": { 00:15:04.388 "uuid": "f867fa88-43b6-49f0-b63b-9bfcba68b083", 00:15:04.388 "strip_size_kb": 64, 00:15:04.388 "state": "online", 00:15:04.388 "raid_level": "raid5f", 00:15:04.388 "superblock": true, 00:15:04.388 "num_base_bdevs": 3, 00:15:04.388 "num_base_bdevs_discovered": 3, 00:15:04.388 "num_base_bdevs_operational": 3, 00:15:04.388 "base_bdevs_list": [ 00:15:04.388 { 00:15:04.388 "name": "pt1", 00:15:04.388 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.388 "is_configured": true, 00:15:04.388 "data_offset": 2048, 00:15:04.388 "data_size": 63488 00:15:04.388 }, 00:15:04.388 { 00:15:04.388 "name": "pt2", 00:15:04.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.388 "is_configured": true, 00:15:04.388 "data_offset": 2048, 00:15:04.388 "data_size": 63488 00:15:04.388 }, 00:15:04.388 { 00:15:04.388 "name": "pt3", 00:15:04.388 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.388 "is_configured": true, 00:15:04.388 "data_offset": 2048, 00:15:04.388 "data_size": 63488 00:15:04.388 } 00:15:04.388 ] 00:15:04.388 } 00:15:04.388 } 00:15:04.388 }' 00:15:04.388 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:04.388 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:04.388 pt2 00:15:04.388 pt3' 00:15:04.388 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.388 08:52:42 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:04.388 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.388 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:04.388 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.388 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.388 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.388 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.648 [2024-09-28 08:52:42.498583] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f867fa88-43b6-49f0-b63b-9bfcba68b083 '!=' f867fa88-43b6-49f0-b63b-9bfcba68b083 ']' 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:04.648 08:52:42 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.648 [2024-09-28 08:52:42.542398] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.648 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.648 "name": "raid_bdev1", 00:15:04.648 "uuid": "f867fa88-43b6-49f0-b63b-9bfcba68b083", 00:15:04.648 "strip_size_kb": 64, 00:15:04.648 "state": "online", 00:15:04.648 "raid_level": "raid5f", 00:15:04.648 "superblock": true, 00:15:04.648 "num_base_bdevs": 3, 00:15:04.648 "num_base_bdevs_discovered": 2, 00:15:04.648 "num_base_bdevs_operational": 2, 00:15:04.648 "base_bdevs_list": [ 00:15:04.648 { 00:15:04.648 "name": null, 00:15:04.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.648 "is_configured": false, 00:15:04.648 "data_offset": 0, 00:15:04.648 "data_size": 63488 00:15:04.648 }, 00:15:04.648 { 00:15:04.648 "name": "pt2", 00:15:04.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.648 "is_configured": true, 00:15:04.648 "data_offset": 2048, 00:15:04.648 "data_size": 63488 00:15:04.648 }, 00:15:04.648 { 00:15:04.648 "name": "pt3", 00:15:04.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.648 "is_configured": true, 00:15:04.649 "data_offset": 2048, 00:15:04.649 "data_size": 63488 00:15:04.649 } 00:15:04.649 ] 00:15:04.649 }' 00:15:04.649 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.649 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.218 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:05.218 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.218 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.218 [2024-09-28 08:52:42.981602] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:15:05.218 [2024-09-28 08:52:42.981691] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.218 [2024-09-28 08:52:42.981775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.218 [2024-09-28 08:52:42.981825] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.218 [2024-09-28 08:52:42.981840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:05.218 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.218 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.218 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.218 08:52:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:05.218 08:52:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.218 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.218 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:05.218 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:05.218 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:05.218 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:05.218 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:05.218 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.218 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.218 08:52:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.218 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:05.218 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:05.218 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:05.218 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.218 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.219 [2024-09-28 08:52:43.069430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:05.219 [2024-09-28 08:52:43.069484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.219 [2024-09-28 08:52:43.069499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:05.219 [2024-09-28 08:52:43.069510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:05.219 [2024-09-28 08:52:43.071606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.219 [2024-09-28 08:52:43.071661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:05.219 [2024-09-28 08:52:43.071728] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:05.219 [2024-09-28 08:52:43.071774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:05.219 pt2 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.219 08:52:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.219 "name": "raid_bdev1", 00:15:05.219 "uuid": "f867fa88-43b6-49f0-b63b-9bfcba68b083", 00:15:05.219 "strip_size_kb": 64, 00:15:05.219 "state": "configuring", 00:15:05.219 "raid_level": "raid5f", 00:15:05.219 "superblock": true, 00:15:05.219 "num_base_bdevs": 3, 00:15:05.219 "num_base_bdevs_discovered": 1, 00:15:05.219 "num_base_bdevs_operational": 2, 00:15:05.219 "base_bdevs_list": [ 00:15:05.219 { 00:15:05.219 "name": null, 00:15:05.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.219 "is_configured": false, 00:15:05.219 "data_offset": 2048, 00:15:05.219 "data_size": 63488 00:15:05.219 }, 00:15:05.219 { 00:15:05.219 "name": "pt2", 00:15:05.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.219 "is_configured": true, 00:15:05.219 "data_offset": 2048, 00:15:05.219 "data_size": 63488 00:15:05.219 }, 00:15:05.219 { 00:15:05.219 "name": null, 00:15:05.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.219 "is_configured": false, 00:15:05.219 "data_offset": 2048, 00:15:05.219 "data_size": 63488 00:15:05.219 } 00:15:05.219 ] 00:15:05.219 }' 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.219 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # 
i=2 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.787 [2024-09-28 08:52:43.564576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:05.787 [2024-09-28 08:52:43.564703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.787 [2024-09-28 08:52:43.564730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:05.787 [2024-09-28 08:52:43.564743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.787 [2024-09-28 08:52:43.565160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.787 [2024-09-28 08:52:43.565180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:05.787 [2024-09-28 08:52:43.565243] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:05.787 [2024-09-28 08:52:43.565277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:05.787 [2024-09-28 08:52:43.565390] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:05.787 [2024-09-28 08:52:43.565402] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:05.787 [2024-09-28 08:52:43.565616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:05.787 pt3 00:15:05.787 [2024-09-28 08:52:43.570709] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:05.787 [2024-09-28 08:52:43.570730] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x617000008200 00:15:05.787 [2024-09-28 08:52:43.571018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.787 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.788 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.788 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.788 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.788 08:52:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.788 "name": "raid_bdev1", 00:15:05.788 "uuid": "f867fa88-43b6-49f0-b63b-9bfcba68b083", 00:15:05.788 "strip_size_kb": 64, 00:15:05.788 "state": "online", 00:15:05.788 "raid_level": "raid5f", 00:15:05.788 "superblock": true, 00:15:05.788 "num_base_bdevs": 3, 00:15:05.788 "num_base_bdevs_discovered": 2, 00:15:05.788 "num_base_bdevs_operational": 2, 00:15:05.788 "base_bdevs_list": [ 00:15:05.788 { 00:15:05.788 "name": null, 00:15:05.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.788 "is_configured": false, 00:15:05.788 "data_offset": 2048, 00:15:05.788 "data_size": 63488 00:15:05.788 }, 00:15:05.788 { 00:15:05.788 "name": "pt2", 00:15:05.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.788 "is_configured": true, 00:15:05.788 "data_offset": 2048, 00:15:05.788 "data_size": 63488 00:15:05.788 }, 00:15:05.788 { 00:15:05.788 "name": "pt3", 00:15:05.788 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.788 "is_configured": true, 00:15:05.788 "data_offset": 2048, 00:15:05.788 "data_size": 63488 00:15:05.788 } 00:15:05.788 ] 00:15:05.788 }' 00:15:05.788 08:52:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.788 08:52:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.048 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:06.048 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.048 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.048 [2024-09-28 08:52:44.024468] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.048 [2024-09-28 08:52:44.024546] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.048 [2024-09-28 08:52:44.024628] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:15:06.048 [2024-09-28 08:52:44.024738] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.048 [2024-09-28 08:52:44.024802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:06.048 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.048 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.048 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.048 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.048 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:06.048 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:06.308 08:52:44 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.308 [2024-09-28 08:52:44.096369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:06.308 [2024-09-28 08:52:44.096474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.308 [2024-09-28 08:52:44.096514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:06.308 [2024-09-28 08:52:44.096525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.308 [2024-09-28 08:52:44.098710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.308 [2024-09-28 08:52:44.098744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:06.308 [2024-09-28 08:52:44.098810] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:06.308 [2024-09-28 08:52:44.098857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:06.308 [2024-09-28 08:52:44.098976] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:06.308 [2024-09-28 08:52:44.099019] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.308 [2024-09-28 08:52:44.099036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:06.308 [2024-09-28 08:52:44.099108] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:06.308 pt1 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:06.308 08:52:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.308 "name": "raid_bdev1", 00:15:06.308 "uuid": "f867fa88-43b6-49f0-b63b-9bfcba68b083", 00:15:06.308 "strip_size_kb": 64, 00:15:06.308 "state": "configuring", 00:15:06.308 "raid_level": "raid5f", 00:15:06.308 
"superblock": true, 00:15:06.308 "num_base_bdevs": 3, 00:15:06.308 "num_base_bdevs_discovered": 1, 00:15:06.308 "num_base_bdevs_operational": 2, 00:15:06.308 "base_bdevs_list": [ 00:15:06.308 { 00:15:06.308 "name": null, 00:15:06.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.308 "is_configured": false, 00:15:06.308 "data_offset": 2048, 00:15:06.308 "data_size": 63488 00:15:06.308 }, 00:15:06.308 { 00:15:06.308 "name": "pt2", 00:15:06.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.308 "is_configured": true, 00:15:06.308 "data_offset": 2048, 00:15:06.308 "data_size": 63488 00:15:06.308 }, 00:15:06.308 { 00:15:06.308 "name": null, 00:15:06.308 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.308 "is_configured": false, 00:15:06.308 "data_offset": 2048, 00:15:06.308 "data_size": 63488 00:15:06.308 } 00:15:06.308 ] 00:15:06.308 }' 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.308 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.878 [2024-09-28 08:52:44.647488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:06.878 [2024-09-28 08:52:44.647542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.878 [2024-09-28 08:52:44.647561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:06.878 [2024-09-28 08:52:44.647572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.878 [2024-09-28 08:52:44.647970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.878 [2024-09-28 08:52:44.648003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:06.878 [2024-09-28 08:52:44.648070] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:06.878 [2024-09-28 08:52:44.648091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:06.878 [2024-09-28 08:52:44.648206] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:06.878 [2024-09-28 08:52:44.648215] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:06.878 [2024-09-28 08:52:44.648486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:06.878 [2024-09-28 08:52:44.653450] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:06.878 [2024-09-28 08:52:44.653479] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:06.878 [2024-09-28 08:52:44.653730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.878 pt3 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.878 "name": "raid_bdev1", 00:15:06.878 "uuid": "f867fa88-43b6-49f0-b63b-9bfcba68b083", 00:15:06.878 "strip_size_kb": 64, 00:15:06.878 "state": "online", 00:15:06.878 "raid_level": 
"raid5f", 00:15:06.878 "superblock": true, 00:15:06.878 "num_base_bdevs": 3, 00:15:06.878 "num_base_bdevs_discovered": 2, 00:15:06.878 "num_base_bdevs_operational": 2, 00:15:06.878 "base_bdevs_list": [ 00:15:06.878 { 00:15:06.878 "name": null, 00:15:06.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.878 "is_configured": false, 00:15:06.878 "data_offset": 2048, 00:15:06.878 "data_size": 63488 00:15:06.878 }, 00:15:06.878 { 00:15:06.878 "name": "pt2", 00:15:06.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.878 "is_configured": true, 00:15:06.878 "data_offset": 2048, 00:15:06.878 "data_size": 63488 00:15:06.878 }, 00:15:06.878 { 00:15:06.878 "name": "pt3", 00:15:06.878 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.878 "is_configured": true, 00:15:06.878 "data_offset": 2048, 00:15:06.878 "data_size": 63488 00:15:06.878 } 00:15:06.878 ] 00:15:06.878 }' 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.878 08:52:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.138 08:52:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:07.138 08:52:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:07.138 08:52:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.139 08:52:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.139 08:52:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.399 [2024-09-28 08:52:45.143570] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f867fa88-43b6-49f0-b63b-9bfcba68b083 '!=' f867fa88-43b6-49f0-b63b-9bfcba68b083 ']' 00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81119 00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81119 ']' 00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81119 00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81119 00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81119' 00:15:07.399 killing process with pid 81119 00:15:07.399 08:52:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 81119 00:15:07.399 [2024-09-28 08:52:45.211014] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.399 [2024-09-28 08:52:45.211143] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:15:07.399 [2024-09-28 08:52:45.211231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 08:52:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 81119 00:15:07.399 [2024-09-28 08:52:45.211324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:07.659 [2024-09-28 08:52:45.489663] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.040 08:52:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:09.040 00:15:09.040 real 0m7.984s 00:15:09.040 user 0m12.377s 00:15:09.040 sys 0m1.523s 00:15:09.041 08:52:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:09.041 08:52:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.041 ************************************ 00:15:09.041 END TEST raid5f_superblock_test 00:15:09.041 ************************************ 00:15:09.041 08:52:46 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:09.041 08:52:46 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:09.041 08:52:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:09.041 08:52:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:09.041 08:52:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:09.041 ************************************ 00:15:09.041 START TEST raid5f_rebuild_test 00:15:09.041 ************************************ 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:09.041 08:52:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81567 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81567 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 81567 ']' 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:09.041 08:52:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.041 [2024-09-28 08:52:46.879917] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:15:09.041 [2024-09-28 08:52:46.880109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81567 ] 00:15:09.041 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:09.041 Zero copy mechanism will not be used. 00:15:09.301 [2024-09-28 08:52:47.046562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.301 [2024-09-28 08:52:47.243714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.563 [2024-09-28 08:52:47.436222] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.563 [2024-09-28 08:52:47.436278] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.830 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.830 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:09.830 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:09.830 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:09.830 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.830 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.830 BaseBdev1_malloc 00:15:09.830 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.830 
08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:09.830 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.830 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.830 [2024-09-28 08:52:47.753698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:09.830 [2024-09-28 08:52:47.753827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.830 [2024-09-28 08:52:47.753873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:09.830 [2024-09-28 08:52:47.753916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.830 [2024-09-28 08:52:47.756068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.830 [2024-09-28 08:52:47.756154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:09.830 BaseBdev1 00:15:09.830 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.830 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:09.831 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:09.831 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.831 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.104 BaseBdev2_malloc 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.104 [2024-09-28 08:52:47.837088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:10.104 [2024-09-28 08:52:47.837150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.104 [2024-09-28 08:52:47.837172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:10.104 [2024-09-28 08:52:47.837184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.104 [2024-09-28 08:52:47.839262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.104 [2024-09-28 08:52:47.839308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:10.104 BaseBdev2 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.104 BaseBdev3_malloc 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.104 [2024-09-28 08:52:47.892945] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:10.104 [2024-09-28 08:52:47.893050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.104 [2024-09-28 08:52:47.893109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:10.104 [2024-09-28 08:52:47.893147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.104 [2024-09-28 08:52:47.895222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.104 [2024-09-28 08:52:47.895335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:10.104 BaseBdev3 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.104 spare_malloc 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.104 spare_delay 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.104 [2024-09-28 08:52:47.959691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:10.104 [2024-09-28 08:52:47.959802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.104 [2024-09-28 08:52:47.959841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:10.104 [2024-09-28 08:52:47.959893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.104 [2024-09-28 08:52:47.962011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.104 [2024-09-28 08:52:47.962115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:10.104 spare 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.104 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.104 [2024-09-28 08:52:47.971745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.104 [2024-09-28 08:52:47.973427] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.104 [2024-09-28 08:52:47.973487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.104 [2024-09-28 08:52:47.973572] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:10.104 [2024-09-28 08:52:47.973580] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:10.104 [2024-09-28 
08:52:47.973835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:10.104 [2024-09-28 08:52:47.979362] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:10.104 [2024-09-28 08:52:47.979445] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:10.104 [2024-09-28 08:52:47.979719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.105 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.105 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:10.105 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.105 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.105 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.105 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.105 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.105 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.105 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.105 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.105 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.105 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.105 08:52:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.105 08:52:47 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.105 08:52:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.105 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.105 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.105 "name": "raid_bdev1", 00:15:10.105 "uuid": "1faa71b2-2a21-4eaa-9a14-bd3c181a4884", 00:15:10.105 "strip_size_kb": 64, 00:15:10.105 "state": "online", 00:15:10.105 "raid_level": "raid5f", 00:15:10.105 "superblock": false, 00:15:10.105 "num_base_bdevs": 3, 00:15:10.105 "num_base_bdevs_discovered": 3, 00:15:10.105 "num_base_bdevs_operational": 3, 00:15:10.105 "base_bdevs_list": [ 00:15:10.105 { 00:15:10.105 "name": "BaseBdev1", 00:15:10.105 "uuid": "1f3b4440-ef84-55d6-87af-5572b8f37e8f", 00:15:10.105 "is_configured": true, 00:15:10.105 "data_offset": 0, 00:15:10.105 "data_size": 65536 00:15:10.105 }, 00:15:10.105 { 00:15:10.105 "name": "BaseBdev2", 00:15:10.105 "uuid": "43be136c-65aa-504b-b9ee-2c0e18ee7baa", 00:15:10.105 "is_configured": true, 00:15:10.105 "data_offset": 0, 00:15:10.105 "data_size": 65536 00:15:10.105 }, 00:15:10.105 { 00:15:10.105 "name": "BaseBdev3", 00:15:10.105 "uuid": "8b3c1afc-e861-568e-881e-93ce0e9c38d1", 00:15:10.105 "is_configured": true, 00:15:10.105 "data_offset": 0, 00:15:10.105 "data_size": 65536 00:15:10.105 } 00:15:10.105 ] 00:15:10.105 }' 00:15:10.105 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.105 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.683 08:52:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.683 [2024-09-28 08:52:48.401698] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:10.683 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.684 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:10.684 [2024-09-28 08:52:48.657102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:10.943 /dev/nbd0 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:10.943 1+0 records in 00:15:10.943 1+0 records out 00:15:10.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388703 s, 10.5 MB/s 00:15:10.943 
08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:10.943 08:52:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:11.203 512+0 records in 00:15:11.203 512+0 records out 00:15:11.203 67108864 bytes (67 MB, 64 MiB) copied, 0.37652 s, 178 MB/s 00:15:11.203 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:11.203 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.203 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:11.203 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:11.203 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:11.203 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:15:11.203 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:11.463 [2024-09-28 08:52:49.320945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.463 [2024-09-28 08:52:49.336141] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.463 08:52:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.463 "name": "raid_bdev1", 00:15:11.463 "uuid": "1faa71b2-2a21-4eaa-9a14-bd3c181a4884", 00:15:11.463 "strip_size_kb": 64, 00:15:11.463 "state": "online", 00:15:11.463 "raid_level": "raid5f", 00:15:11.463 "superblock": false, 00:15:11.463 "num_base_bdevs": 3, 00:15:11.463 "num_base_bdevs_discovered": 2, 00:15:11.463 "num_base_bdevs_operational": 2, 00:15:11.463 "base_bdevs_list": [ 00:15:11.463 { 00:15:11.463 "name": null, 00:15:11.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.463 "is_configured": false, 00:15:11.463 "data_offset": 0, 00:15:11.463 "data_size": 65536 00:15:11.463 }, 00:15:11.463 { 00:15:11.463 
"name": "BaseBdev2", 00:15:11.463 "uuid": "43be136c-65aa-504b-b9ee-2c0e18ee7baa", 00:15:11.463 "is_configured": true, 00:15:11.463 "data_offset": 0, 00:15:11.463 "data_size": 65536 00:15:11.463 }, 00:15:11.463 { 00:15:11.463 "name": "BaseBdev3", 00:15:11.463 "uuid": "8b3c1afc-e861-568e-881e-93ce0e9c38d1", 00:15:11.463 "is_configured": true, 00:15:11.463 "data_offset": 0, 00:15:11.463 "data_size": 65536 00:15:11.463 } 00:15:11.463 ] 00:15:11.463 }' 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.463 08:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.032 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:12.032 08:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.032 08:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.032 [2024-09-28 08:52:49.811373] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.032 [2024-09-28 08:52:49.826607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:12.032 08:52:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.032 08:52:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:12.032 [2024-09-28 08:52:49.833970] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:12.969 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.969 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.969 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.969 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:12.969 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.969 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.969 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.969 08:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.969 08:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.969 08:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.969 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.969 "name": "raid_bdev1", 00:15:12.969 "uuid": "1faa71b2-2a21-4eaa-9a14-bd3c181a4884", 00:15:12.969 "strip_size_kb": 64, 00:15:12.969 "state": "online", 00:15:12.969 "raid_level": "raid5f", 00:15:12.969 "superblock": false, 00:15:12.969 "num_base_bdevs": 3, 00:15:12.969 "num_base_bdevs_discovered": 3, 00:15:12.969 "num_base_bdevs_operational": 3, 00:15:12.969 "process": { 00:15:12.969 "type": "rebuild", 00:15:12.969 "target": "spare", 00:15:12.969 "progress": { 00:15:12.969 "blocks": 20480, 00:15:12.969 "percent": 15 00:15:12.969 } 00:15:12.969 }, 00:15:12.969 "base_bdevs_list": [ 00:15:12.969 { 00:15:12.969 "name": "spare", 00:15:12.969 "uuid": "1c36552c-dc5f-5284-a6dc-b3f0ba69abcb", 00:15:12.969 "is_configured": true, 00:15:12.969 "data_offset": 0, 00:15:12.969 "data_size": 65536 00:15:12.969 }, 00:15:12.970 { 00:15:12.970 "name": "BaseBdev2", 00:15:12.970 "uuid": "43be136c-65aa-504b-b9ee-2c0e18ee7baa", 00:15:12.970 "is_configured": true, 00:15:12.970 "data_offset": 0, 00:15:12.970 "data_size": 65536 00:15:12.970 }, 00:15:12.970 { 00:15:12.970 "name": "BaseBdev3", 00:15:12.970 "uuid": "8b3c1afc-e861-568e-881e-93ce0e9c38d1", 00:15:12.970 "is_configured": true, 00:15:12.970 "data_offset": 0, 00:15:12.970 
"data_size": 65536 00:15:12.970 } 00:15:12.970 ] 00:15:12.970 }' 00:15:12.970 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.970 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.970 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.229 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.229 08:52:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:13.229 08:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.229 08:52:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.229 [2024-09-28 08:52:50.984566] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.229 [2024-09-28 08:52:51.043265] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:13.229 [2024-09-28 08:52:51.043321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.229 [2024-09-28 08:52:51.043343] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.229 [2024-09-28 08:52:51.043351] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.229 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.229 "name": "raid_bdev1", 00:15:13.229 "uuid": "1faa71b2-2a21-4eaa-9a14-bd3c181a4884", 00:15:13.229 "strip_size_kb": 64, 00:15:13.229 "state": "online", 00:15:13.229 "raid_level": "raid5f", 00:15:13.229 "superblock": false, 00:15:13.229 "num_base_bdevs": 3, 00:15:13.229 "num_base_bdevs_discovered": 2, 00:15:13.230 "num_base_bdevs_operational": 2, 00:15:13.230 "base_bdevs_list": [ 00:15:13.230 { 00:15:13.230 "name": null, 00:15:13.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.230 "is_configured": false, 00:15:13.230 "data_offset": 0, 00:15:13.230 "data_size": 65536 00:15:13.230 }, 00:15:13.230 { 00:15:13.230 "name": "BaseBdev2", 00:15:13.230 
"uuid": "43be136c-65aa-504b-b9ee-2c0e18ee7baa", 00:15:13.230 "is_configured": true, 00:15:13.230 "data_offset": 0, 00:15:13.230 "data_size": 65536 00:15:13.230 }, 00:15:13.230 { 00:15:13.230 "name": "BaseBdev3", 00:15:13.230 "uuid": "8b3c1afc-e861-568e-881e-93ce0e9c38d1", 00:15:13.230 "is_configured": true, 00:15:13.230 "data_offset": 0, 00:15:13.230 "data_size": 65536 00:15:13.230 } 00:15:13.230 ] 00:15:13.230 }' 00:15:13.230 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.230 08:52:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.798 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:13.798 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.798 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.798 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.798 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.798 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.798 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.798 08:52:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.798 08:52:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.798 08:52:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.798 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.798 "name": "raid_bdev1", 00:15:13.798 "uuid": "1faa71b2-2a21-4eaa-9a14-bd3c181a4884", 00:15:13.798 "strip_size_kb": 64, 00:15:13.798 "state": "online", 00:15:13.798 "raid_level": 
"raid5f", 00:15:13.798 "superblock": false, 00:15:13.798 "num_base_bdevs": 3, 00:15:13.798 "num_base_bdevs_discovered": 2, 00:15:13.798 "num_base_bdevs_operational": 2, 00:15:13.798 "base_bdevs_list": [ 00:15:13.798 { 00:15:13.798 "name": null, 00:15:13.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.799 "is_configured": false, 00:15:13.799 "data_offset": 0, 00:15:13.799 "data_size": 65536 00:15:13.799 }, 00:15:13.799 { 00:15:13.799 "name": "BaseBdev2", 00:15:13.799 "uuid": "43be136c-65aa-504b-b9ee-2c0e18ee7baa", 00:15:13.799 "is_configured": true, 00:15:13.799 "data_offset": 0, 00:15:13.799 "data_size": 65536 00:15:13.799 }, 00:15:13.799 { 00:15:13.799 "name": "BaseBdev3", 00:15:13.799 "uuid": "8b3c1afc-e861-568e-881e-93ce0e9c38d1", 00:15:13.799 "is_configured": true, 00:15:13.799 "data_offset": 0, 00:15:13.799 "data_size": 65536 00:15:13.799 } 00:15:13.799 ] 00:15:13.799 }' 00:15:13.799 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.799 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.799 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.799 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.799 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:13.799 08:52:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.799 08:52:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.799 [2024-09-28 08:52:51.604180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.799 [2024-09-28 08:52:51.618715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:13.799 08:52:51 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.799 08:52:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:13.799 [2024-09-28 08:52:51.625921] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:14.738 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.738 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.738 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.738 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.738 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.738 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.738 08:52:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.738 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.738 08:52:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.738 08:52:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.738 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.738 "name": "raid_bdev1", 00:15:14.738 "uuid": "1faa71b2-2a21-4eaa-9a14-bd3c181a4884", 00:15:14.738 "strip_size_kb": 64, 00:15:14.738 "state": "online", 00:15:14.738 "raid_level": "raid5f", 00:15:14.738 "superblock": false, 00:15:14.738 "num_base_bdevs": 3, 00:15:14.738 "num_base_bdevs_discovered": 3, 00:15:14.738 "num_base_bdevs_operational": 3, 00:15:14.738 "process": { 00:15:14.738 "type": "rebuild", 00:15:14.738 "target": "spare", 00:15:14.738 "progress": { 00:15:14.738 "blocks": 20480, 00:15:14.738 
"percent": 15 00:15:14.738 } 00:15:14.738 }, 00:15:14.738 "base_bdevs_list": [ 00:15:14.738 { 00:15:14.738 "name": "spare", 00:15:14.738 "uuid": "1c36552c-dc5f-5284-a6dc-b3f0ba69abcb", 00:15:14.738 "is_configured": true, 00:15:14.738 "data_offset": 0, 00:15:14.738 "data_size": 65536 00:15:14.738 }, 00:15:14.738 { 00:15:14.738 "name": "BaseBdev2", 00:15:14.738 "uuid": "43be136c-65aa-504b-b9ee-2c0e18ee7baa", 00:15:14.738 "is_configured": true, 00:15:14.738 "data_offset": 0, 00:15:14.738 "data_size": 65536 00:15:14.738 }, 00:15:14.738 { 00:15:14.738 "name": "BaseBdev3", 00:15:14.738 "uuid": "8b3c1afc-e861-568e-881e-93ce0e9c38d1", 00:15:14.738 "is_configured": true, 00:15:14.738 "data_offset": 0, 00:15:14.738 "data_size": 65536 00:15:14.738 } 00:15:14.738 ] 00:15:14.738 }' 00:15:14.738 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.738 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.738 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.997 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.997 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:14.997 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:14.997 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:14.997 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=552 00:15:14.997 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.997 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.997 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:14.997 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.997 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.997 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.997 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.997 08:52:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.997 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.997 08:52:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.998 08:52:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.998 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.998 "name": "raid_bdev1", 00:15:14.998 "uuid": "1faa71b2-2a21-4eaa-9a14-bd3c181a4884", 00:15:14.998 "strip_size_kb": 64, 00:15:14.998 "state": "online", 00:15:14.998 "raid_level": "raid5f", 00:15:14.998 "superblock": false, 00:15:14.998 "num_base_bdevs": 3, 00:15:14.998 "num_base_bdevs_discovered": 3, 00:15:14.998 "num_base_bdevs_operational": 3, 00:15:14.998 "process": { 00:15:14.998 "type": "rebuild", 00:15:14.998 "target": "spare", 00:15:14.998 "progress": { 00:15:14.998 "blocks": 22528, 00:15:14.998 "percent": 17 00:15:14.998 } 00:15:14.998 }, 00:15:14.998 "base_bdevs_list": [ 00:15:14.998 { 00:15:14.998 "name": "spare", 00:15:14.998 "uuid": "1c36552c-dc5f-5284-a6dc-b3f0ba69abcb", 00:15:14.998 "is_configured": true, 00:15:14.998 "data_offset": 0, 00:15:14.998 "data_size": 65536 00:15:14.998 }, 00:15:14.998 { 00:15:14.998 "name": "BaseBdev2", 00:15:14.998 "uuid": "43be136c-65aa-504b-b9ee-2c0e18ee7baa", 00:15:14.998 "is_configured": true, 00:15:14.998 "data_offset": 0, 00:15:14.998 
"data_size": 65536 00:15:14.998 }, 00:15:14.998 { 00:15:14.998 "name": "BaseBdev3", 00:15:14.998 "uuid": "8b3c1afc-e861-568e-881e-93ce0e9c38d1", 00:15:14.998 "is_configured": true, 00:15:14.998 "data_offset": 0, 00:15:14.998 "data_size": 65536 00:15:14.998 } 00:15:14.998 ] 00:15:14.998 }' 00:15:14.998 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.998 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.998 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.998 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.998 08:52:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:15.937 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.937 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.937 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.937 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.937 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.937 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.197 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.197 08:52:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.197 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.197 08:52:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.197 08:52:53 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.197 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.197 "name": "raid_bdev1", 00:15:16.197 "uuid": "1faa71b2-2a21-4eaa-9a14-bd3c181a4884", 00:15:16.197 "strip_size_kb": 64, 00:15:16.197 "state": "online", 00:15:16.197 "raid_level": "raid5f", 00:15:16.197 "superblock": false, 00:15:16.197 "num_base_bdevs": 3, 00:15:16.197 "num_base_bdevs_discovered": 3, 00:15:16.197 "num_base_bdevs_operational": 3, 00:15:16.197 "process": { 00:15:16.197 "type": "rebuild", 00:15:16.197 "target": "spare", 00:15:16.197 "progress": { 00:15:16.197 "blocks": 45056, 00:15:16.197 "percent": 34 00:15:16.197 } 00:15:16.197 }, 00:15:16.197 "base_bdevs_list": [ 00:15:16.197 { 00:15:16.197 "name": "spare", 00:15:16.197 "uuid": "1c36552c-dc5f-5284-a6dc-b3f0ba69abcb", 00:15:16.197 "is_configured": true, 00:15:16.197 "data_offset": 0, 00:15:16.197 "data_size": 65536 00:15:16.197 }, 00:15:16.197 { 00:15:16.197 "name": "BaseBdev2", 00:15:16.197 "uuid": "43be136c-65aa-504b-b9ee-2c0e18ee7baa", 00:15:16.197 "is_configured": true, 00:15:16.197 "data_offset": 0, 00:15:16.197 "data_size": 65536 00:15:16.197 }, 00:15:16.197 { 00:15:16.197 "name": "BaseBdev3", 00:15:16.197 "uuid": "8b3c1afc-e861-568e-881e-93ce0e9c38d1", 00:15:16.197 "is_configured": true, 00:15:16.197 "data_offset": 0, 00:15:16.197 "data_size": 65536 00:15:16.197 } 00:15:16.197 ] 00:15:16.197 }' 00:15:16.197 08:52:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.197 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.197 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.197 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.197 08:52:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:15:17.136 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.136 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.136 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.136 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.136 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.136 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.136 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.136 08:52:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.136 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.136 08:52:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.136 08:52:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.395 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.395 "name": "raid_bdev1", 00:15:17.395 "uuid": "1faa71b2-2a21-4eaa-9a14-bd3c181a4884", 00:15:17.395 "strip_size_kb": 64, 00:15:17.395 "state": "online", 00:15:17.395 "raid_level": "raid5f", 00:15:17.395 "superblock": false, 00:15:17.395 "num_base_bdevs": 3, 00:15:17.395 "num_base_bdevs_discovered": 3, 00:15:17.395 "num_base_bdevs_operational": 3, 00:15:17.395 "process": { 00:15:17.396 "type": "rebuild", 00:15:17.396 "target": "spare", 00:15:17.396 "progress": { 00:15:17.396 "blocks": 69632, 00:15:17.396 "percent": 53 00:15:17.396 } 00:15:17.396 }, 00:15:17.396 "base_bdevs_list": [ 00:15:17.396 { 00:15:17.396 "name": "spare", 00:15:17.396 "uuid": 
"1c36552c-dc5f-5284-a6dc-b3f0ba69abcb", 00:15:17.396 "is_configured": true, 00:15:17.396 "data_offset": 0, 00:15:17.396 "data_size": 65536 00:15:17.396 }, 00:15:17.396 { 00:15:17.396 "name": "BaseBdev2", 00:15:17.396 "uuid": "43be136c-65aa-504b-b9ee-2c0e18ee7baa", 00:15:17.396 "is_configured": true, 00:15:17.396 "data_offset": 0, 00:15:17.396 "data_size": 65536 00:15:17.396 }, 00:15:17.396 { 00:15:17.396 "name": "BaseBdev3", 00:15:17.396 "uuid": "8b3c1afc-e861-568e-881e-93ce0e9c38d1", 00:15:17.396 "is_configured": true, 00:15:17.396 "data_offset": 0, 00:15:17.396 "data_size": 65536 00:15:17.396 } 00:15:17.396 ] 00:15:17.396 }' 00:15:17.396 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.396 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.396 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.396 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.396 08:52:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:18.336 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.336 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.336 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.336 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.336 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.336 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.336 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.336 08:52:56 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.336 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.336 08:52:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.336 08:52:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.336 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.336 "name": "raid_bdev1", 00:15:18.336 "uuid": "1faa71b2-2a21-4eaa-9a14-bd3c181a4884", 00:15:18.336 "strip_size_kb": 64, 00:15:18.336 "state": "online", 00:15:18.336 "raid_level": "raid5f", 00:15:18.336 "superblock": false, 00:15:18.336 "num_base_bdevs": 3, 00:15:18.336 "num_base_bdevs_discovered": 3, 00:15:18.336 "num_base_bdevs_operational": 3, 00:15:18.336 "process": { 00:15:18.336 "type": "rebuild", 00:15:18.336 "target": "spare", 00:15:18.336 "progress": { 00:15:18.336 "blocks": 92160, 00:15:18.336 "percent": 70 00:15:18.336 } 00:15:18.336 }, 00:15:18.336 "base_bdevs_list": [ 00:15:18.336 { 00:15:18.336 "name": "spare", 00:15:18.336 "uuid": "1c36552c-dc5f-5284-a6dc-b3f0ba69abcb", 00:15:18.336 "is_configured": true, 00:15:18.336 "data_offset": 0, 00:15:18.336 "data_size": 65536 00:15:18.336 }, 00:15:18.336 { 00:15:18.336 "name": "BaseBdev2", 00:15:18.336 "uuid": "43be136c-65aa-504b-b9ee-2c0e18ee7baa", 00:15:18.336 "is_configured": true, 00:15:18.336 "data_offset": 0, 00:15:18.336 "data_size": 65536 00:15:18.336 }, 00:15:18.336 { 00:15:18.336 "name": "BaseBdev3", 00:15:18.336 "uuid": "8b3c1afc-e861-568e-881e-93ce0e9c38d1", 00:15:18.336 "is_configured": true, 00:15:18.336 "data_offset": 0, 00:15:18.336 "data_size": 65536 00:15:18.336 } 00:15:18.336 ] 00:15:18.336 }' 00:15:18.336 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.596 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.596 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.596 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.596 08:52:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:19.536 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:19.536 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.536 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.536 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.536 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.536 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.536 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.536 08:52:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.536 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.536 08:52:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.536 08:52:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.536 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.536 "name": "raid_bdev1", 00:15:19.536 "uuid": "1faa71b2-2a21-4eaa-9a14-bd3c181a4884", 00:15:19.536 "strip_size_kb": 64, 00:15:19.536 "state": "online", 00:15:19.536 "raid_level": "raid5f", 00:15:19.536 "superblock": false, 00:15:19.536 "num_base_bdevs": 3, 00:15:19.536 "num_base_bdevs_discovered": 3, 00:15:19.536 
"num_base_bdevs_operational": 3, 00:15:19.536 "process": { 00:15:19.536 "type": "rebuild", 00:15:19.536 "target": "spare", 00:15:19.536 "progress": { 00:15:19.536 "blocks": 116736, 00:15:19.536 "percent": 89 00:15:19.536 } 00:15:19.536 }, 00:15:19.536 "base_bdevs_list": [ 00:15:19.536 { 00:15:19.536 "name": "spare", 00:15:19.536 "uuid": "1c36552c-dc5f-5284-a6dc-b3f0ba69abcb", 00:15:19.536 "is_configured": true, 00:15:19.536 "data_offset": 0, 00:15:19.536 "data_size": 65536 00:15:19.536 }, 00:15:19.536 { 00:15:19.536 "name": "BaseBdev2", 00:15:19.536 "uuid": "43be136c-65aa-504b-b9ee-2c0e18ee7baa", 00:15:19.536 "is_configured": true, 00:15:19.536 "data_offset": 0, 00:15:19.536 "data_size": 65536 00:15:19.536 }, 00:15:19.536 { 00:15:19.536 "name": "BaseBdev3", 00:15:19.536 "uuid": "8b3c1afc-e861-568e-881e-93ce0e9c38d1", 00:15:19.536 "is_configured": true, 00:15:19.536 "data_offset": 0, 00:15:19.536 "data_size": 65536 00:15:19.536 } 00:15:19.536 ] 00:15:19.536 }' 00:15:19.536 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.536 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.536 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.796 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.796 08:52:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.365 [2024-09-28 08:52:58.070953] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:20.365 [2024-09-28 08:52:58.071033] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:20.365 [2024-09-28 08:52:58.071075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.624 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:20.624 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.624 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.624 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.624 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.624 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.624 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.624 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.624 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.624 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.624 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.624 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.624 "name": "raid_bdev1", 00:15:20.624 "uuid": "1faa71b2-2a21-4eaa-9a14-bd3c181a4884", 00:15:20.624 "strip_size_kb": 64, 00:15:20.624 "state": "online", 00:15:20.624 "raid_level": "raid5f", 00:15:20.624 "superblock": false, 00:15:20.624 "num_base_bdevs": 3, 00:15:20.624 "num_base_bdevs_discovered": 3, 00:15:20.624 "num_base_bdevs_operational": 3, 00:15:20.624 "base_bdevs_list": [ 00:15:20.624 { 00:15:20.624 "name": "spare", 00:15:20.624 "uuid": "1c36552c-dc5f-5284-a6dc-b3f0ba69abcb", 00:15:20.624 "is_configured": true, 00:15:20.624 "data_offset": 0, 00:15:20.624 "data_size": 65536 00:15:20.624 }, 00:15:20.624 { 00:15:20.624 "name": "BaseBdev2", 00:15:20.624 "uuid": "43be136c-65aa-504b-b9ee-2c0e18ee7baa", 00:15:20.624 "is_configured": true, 00:15:20.624 
"data_offset": 0, 00:15:20.624 "data_size": 65536 00:15:20.624 }, 00:15:20.624 { 00:15:20.624 "name": "BaseBdev3", 00:15:20.624 "uuid": "8b3c1afc-e861-568e-881e-93ce0e9c38d1", 00:15:20.624 "is_configured": true, 00:15:20.624 "data_offset": 0, 00:15:20.624 "data_size": 65536 00:15:20.624 } 00:15:20.624 ] 00:15:20.624 }' 00:15:20.624 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.883 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:20.883 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.883 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:20.883 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:20.883 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.883 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.883 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.883 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.883 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.883 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.883 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.883 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.883 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.883 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.883 08:52:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.883 "name": "raid_bdev1", 00:15:20.883 "uuid": "1faa71b2-2a21-4eaa-9a14-bd3c181a4884", 00:15:20.883 "strip_size_kb": 64, 00:15:20.883 "state": "online", 00:15:20.883 "raid_level": "raid5f", 00:15:20.883 "superblock": false, 00:15:20.883 "num_base_bdevs": 3, 00:15:20.883 "num_base_bdevs_discovered": 3, 00:15:20.883 "num_base_bdevs_operational": 3, 00:15:20.883 "base_bdevs_list": [ 00:15:20.883 { 00:15:20.883 "name": "spare", 00:15:20.883 "uuid": "1c36552c-dc5f-5284-a6dc-b3f0ba69abcb", 00:15:20.883 "is_configured": true, 00:15:20.883 "data_offset": 0, 00:15:20.883 "data_size": 65536 00:15:20.883 }, 00:15:20.883 { 00:15:20.883 "name": "BaseBdev2", 00:15:20.883 "uuid": "43be136c-65aa-504b-b9ee-2c0e18ee7baa", 00:15:20.883 "is_configured": true, 00:15:20.883 "data_offset": 0, 00:15:20.883 "data_size": 65536 00:15:20.883 }, 00:15:20.883 { 00:15:20.884 "name": "BaseBdev3", 00:15:20.884 "uuid": "8b3c1afc-e861-568e-881e-93ce0e9c38d1", 00:15:20.884 "is_configured": true, 00:15:20.884 "data_offset": 0, 00:15:20.884 "data_size": 65536 00:15:20.884 } 00:15:20.884 ] 00:15:20.884 }' 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.884 08:52:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.884 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.143 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.143 "name": "raid_bdev1", 00:15:21.143 "uuid": "1faa71b2-2a21-4eaa-9a14-bd3c181a4884", 00:15:21.143 "strip_size_kb": 64, 00:15:21.143 "state": "online", 00:15:21.143 "raid_level": "raid5f", 00:15:21.143 "superblock": false, 00:15:21.143 "num_base_bdevs": 3, 00:15:21.143 "num_base_bdevs_discovered": 3, 00:15:21.143 "num_base_bdevs_operational": 3, 00:15:21.143 "base_bdevs_list": [ 00:15:21.143 { 00:15:21.143 "name": "spare", 00:15:21.143 "uuid": "1c36552c-dc5f-5284-a6dc-b3f0ba69abcb", 00:15:21.143 "is_configured": true, 00:15:21.143 "data_offset": 0, 00:15:21.143 "data_size": 65536 00:15:21.143 }, 00:15:21.143 { 00:15:21.143 
"name": "BaseBdev2", 00:15:21.143 "uuid": "43be136c-65aa-504b-b9ee-2c0e18ee7baa", 00:15:21.143 "is_configured": true, 00:15:21.143 "data_offset": 0, 00:15:21.143 "data_size": 65536 00:15:21.143 }, 00:15:21.143 { 00:15:21.143 "name": "BaseBdev3", 00:15:21.143 "uuid": "8b3c1afc-e861-568e-881e-93ce0e9c38d1", 00:15:21.143 "is_configured": true, 00:15:21.143 "data_offset": 0, 00:15:21.143 "data_size": 65536 00:15:21.143 } 00:15:21.143 ] 00:15:21.143 }' 00:15:21.143 08:52:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.143 08:52:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.403 [2024-09-28 08:52:59.328242] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:21.403 [2024-09-28 08:52:59.328326] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.403 [2024-09-28 08:52:59.328439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.403 [2024-09-28 08:52:59.328550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.403 [2024-09-28 08:52:59.328604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:21.403 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:21.662 /dev/nbd0 00:15:21.662 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:21.662 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:21.662 08:52:59 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:21.662 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:21.662 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:21.662 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:21.662 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:21.663 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:21.663 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:21.663 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:21.663 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:21.663 1+0 records in 00:15:21.663 1+0 records out 00:15:21.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568121 s, 7.2 MB/s 00:15:21.663 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.663 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:21.663 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.663 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:21.663 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:21.663 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:21.663 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:21.663 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:21.922 /dev/nbd1 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:21.922 1+0 records in 00:15:21.922 1+0 records out 00:15:21.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534173 s, 7.7 MB/s 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:21.922 08:52:59 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:21.922 08:52:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:22.182 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:22.182 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.182 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:22.182 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:22.182 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:22.182 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.182 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:22.441 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:22.441 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:22.441 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:22.441 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.441 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.441 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:22.441 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:22.441 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:22.441 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.441 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81567 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 81567 ']' 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 81567 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81567 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81567' 00:15:22.701 killing process with pid 81567 00:15:22.701 Received shutdown signal, test time was about 60.000000 seconds 00:15:22.701 00:15:22.701 Latency(us) 00:15:22.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.701 =================================================================================================================== 00:15:22.701 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 81567 00:15:22.701 [2024-09-28 08:53:00.566496] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.701 08:53:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 81567 00:15:23.270 [2024-09-28 08:53:00.973810] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:24.650 00:15:24.650 real 0m15.503s 00:15:24.650 user 0m18.898s 00:15:24.650 sys 0m2.172s 00:15:24.650 ************************************ 00:15:24.650 END TEST raid5f_rebuild_test 00:15:24.650 ************************************ 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.650 08:53:02 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:24.650 08:53:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:24.650 08:53:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:24.650 08:53:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:24.650 
************************************ 00:15:24.650 START TEST raid5f_rebuild_test_sb 00:15:24.650 ************************************ 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82007 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82007 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82007 ']' 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.650 
08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.650 08:53:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.650 [2024-09-28 08:53:02.472835] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:15:24.650 [2024-09-28 08:53:02.473047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82007 ] 00:15:24.650 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:24.650 Zero copy mechanism will not be used. 
00:15:24.650 [2024-09-28 08:53:02.643027] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.910 [2024-09-28 08:53:02.879475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.169 [2024-09-28 08:53:03.109472] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.169 [2024-09-28 08:53:03.109587] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.429 BaseBdev1_malloc 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.429 [2024-09-28 08:53:03.354787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:25.429 [2024-09-28 08:53:03.354856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.429 [2024-09-28 08:53:03.354882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:25.429 
[2024-09-28 08:53:03.354897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.429 [2024-09-28 08:53:03.357311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.429 [2024-09-28 08:53:03.357390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:25.429 BaseBdev1 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.429 BaseBdev2_malloc 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.429 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.429 [2024-09-28 08:53:03.422199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:25.429 [2024-09-28 08:53:03.422262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.429 [2024-09-28 08:53:03.422282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:25.429 [2024-09-28 08:53:03.422296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.692 [2024-09-28 08:53:03.424647] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.692 [2024-09-28 08:53:03.424696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:25.692 BaseBdev2 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.692 BaseBdev3_malloc 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.692 [2024-09-28 08:53:03.483240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:25.692 [2024-09-28 08:53:03.483299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.692 [2024-09-28 08:53:03.483321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:25.692 [2024-09-28 08:53:03.483332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.692 [2024-09-28 08:53:03.485622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.692 [2024-09-28 08:53:03.485671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:25.692 BaseBdev3 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.692 spare_malloc 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.692 spare_delay 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.692 [2024-09-28 08:53:03.558178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:25.692 [2024-09-28 08:53:03.558268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.692 [2024-09-28 08:53:03.558288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:25.692 [2024-09-28 08:53:03.558299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.692 [2024-09-28 08:53:03.560701] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.692 [2024-09-28 08:53:03.560756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:25.692 spare 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.692 [2024-09-28 08:53:03.570242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.692 [2024-09-28 08:53:03.572257] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.692 [2024-09-28 08:53:03.572321] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.692 [2024-09-28 08:53:03.572514] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:25.692 [2024-09-28 08:53:03.572526] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:25.692 [2024-09-28 08:53:03.572778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:25.692 [2024-09-28 08:53:03.578181] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:25.692 [2024-09-28 08:53:03.578220] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:25.692 [2024-09-28 08:53:03.578422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.692 08:53:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.692 "name": "raid_bdev1", 00:15:25.692 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:25.692 "strip_size_kb": 64, 00:15:25.692 "state": "online", 00:15:25.692 "raid_level": "raid5f", 00:15:25.692 "superblock": true, 
00:15:25.692 "num_base_bdevs": 3, 00:15:25.692 "num_base_bdevs_discovered": 3, 00:15:25.692 "num_base_bdevs_operational": 3, 00:15:25.692 "base_bdevs_list": [ 00:15:25.692 { 00:15:25.692 "name": "BaseBdev1", 00:15:25.692 "uuid": "d458378b-c956-5a2e-bfa4-2e3e16fcb367", 00:15:25.692 "is_configured": true, 00:15:25.692 "data_offset": 2048, 00:15:25.692 "data_size": 63488 00:15:25.692 }, 00:15:25.692 { 00:15:25.692 "name": "BaseBdev2", 00:15:25.692 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:25.692 "is_configured": true, 00:15:25.692 "data_offset": 2048, 00:15:25.692 "data_size": 63488 00:15:25.692 }, 00:15:25.692 { 00:15:25.692 "name": "BaseBdev3", 00:15:25.692 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:25.692 "is_configured": true, 00:15:25.692 "data_offset": 2048, 00:15:25.692 "data_size": 63488 00:15:25.692 } 00:15:25.692 ] 00:15:25.692 }' 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.692 08:53:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.261 [2024-09-28 08:53:04.076893] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:26.261 08:53:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:26.261 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:26.522 
[2024-09-28 08:53:04.372238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:26.522 /dev/nbd0 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:26.522 1+0 records in 00:15:26.522 1+0 records out 00:15:26.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451527 s, 9.1 MB/s 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:26.522 08:53:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:27.091 496+0 records in 00:15:27.091 496+0 records out 00:15:27.091 65011712 bytes (65 MB, 62 MiB) copied, 0.548738 s, 118 MB/s 00:15:27.091 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:27.091 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.091 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:27.091 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:27.091 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:27.091 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:27.091 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:27.351 [2024-09-28 08:53:05.223894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.351 [2024-09-28 08:53:05.243482] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.351 08:53:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.351 "name": "raid_bdev1", 00:15:27.351 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:27.351 "strip_size_kb": 64, 00:15:27.351 "state": "online", 00:15:27.351 "raid_level": "raid5f", 00:15:27.351 "superblock": true, 00:15:27.351 "num_base_bdevs": 3, 00:15:27.351 "num_base_bdevs_discovered": 2, 00:15:27.351 "num_base_bdevs_operational": 2, 00:15:27.351 "base_bdevs_list": [ 00:15:27.351 { 00:15:27.351 "name": null, 00:15:27.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.351 "is_configured": false, 00:15:27.351 "data_offset": 0, 00:15:27.351 "data_size": 63488 00:15:27.351 }, 00:15:27.351 { 00:15:27.351 "name": "BaseBdev2", 00:15:27.351 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:27.351 "is_configured": true, 00:15:27.351 "data_offset": 2048, 00:15:27.351 "data_size": 63488 00:15:27.351 }, 00:15:27.351 { 00:15:27.351 "name": "BaseBdev3", 00:15:27.351 "uuid": 
"7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:27.351 "is_configured": true, 00:15:27.351 "data_offset": 2048, 00:15:27.351 "data_size": 63488 00:15:27.351 } 00:15:27.351 ] 00:15:27.351 }' 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.351 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.920 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:27.920 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.920 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.921 [2024-09-28 08:53:05.714744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:27.921 [2024-09-28 08:53:05.731103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:27.921 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.921 08:53:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:27.921 [2024-09-28 08:53:05.738820] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:28.859 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.859 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.859 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.859 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.859 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.859 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:15:28.859 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.859 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.859 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.859 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.860 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.860 "name": "raid_bdev1", 00:15:28.860 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:28.860 "strip_size_kb": 64, 00:15:28.860 "state": "online", 00:15:28.860 "raid_level": "raid5f", 00:15:28.860 "superblock": true, 00:15:28.860 "num_base_bdevs": 3, 00:15:28.860 "num_base_bdevs_discovered": 3, 00:15:28.860 "num_base_bdevs_operational": 3, 00:15:28.860 "process": { 00:15:28.860 "type": "rebuild", 00:15:28.860 "target": "spare", 00:15:28.860 "progress": { 00:15:28.860 "blocks": 20480, 00:15:28.860 "percent": 16 00:15:28.860 } 00:15:28.860 }, 00:15:28.860 "base_bdevs_list": [ 00:15:28.860 { 00:15:28.860 "name": "spare", 00:15:28.860 "uuid": "953b822e-8da7-5b2a-b356-e1e5d8599fe2", 00:15:28.860 "is_configured": true, 00:15:28.860 "data_offset": 2048, 00:15:28.860 "data_size": 63488 00:15:28.860 }, 00:15:28.860 { 00:15:28.860 "name": "BaseBdev2", 00:15:28.860 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:28.860 "is_configured": true, 00:15:28.860 "data_offset": 2048, 00:15:28.860 "data_size": 63488 00:15:28.860 }, 00:15:28.860 { 00:15:28.860 "name": "BaseBdev3", 00:15:28.860 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:28.860 "is_configured": true, 00:15:28.860 "data_offset": 2048, 00:15:28.860 "data_size": 63488 00:15:28.860 } 00:15:28.860 ] 00:15:28.860 }' 00:15:28.860 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.860 08:53:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.860 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.120 [2024-09-28 08:53:06.874028] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:29.120 [2024-09-28 08:53:06.947904] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:29.120 [2024-09-28 08:53:06.948022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.120 [2024-09-28 08:53:06.948045] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:29.120 [2024-09-28 08:53:06.948054] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.120 08:53:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.120 08:53:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.120 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.120 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.120 "name": "raid_bdev1", 00:15:29.120 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:29.120 "strip_size_kb": 64, 00:15:29.120 "state": "online", 00:15:29.120 "raid_level": "raid5f", 00:15:29.120 "superblock": true, 00:15:29.120 "num_base_bdevs": 3, 00:15:29.120 "num_base_bdevs_discovered": 2, 00:15:29.120 "num_base_bdevs_operational": 2, 00:15:29.120 "base_bdevs_list": [ 00:15:29.120 { 00:15:29.120 "name": null, 00:15:29.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.120 "is_configured": false, 00:15:29.120 "data_offset": 0, 00:15:29.120 "data_size": 63488 00:15:29.120 }, 00:15:29.120 { 00:15:29.120 "name": "BaseBdev2", 00:15:29.120 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:29.120 "is_configured": true, 00:15:29.120 "data_offset": 2048, 00:15:29.120 "data_size": 
63488 00:15:29.120 }, 00:15:29.120 { 00:15:29.120 "name": "BaseBdev3", 00:15:29.120 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:29.120 "is_configured": true, 00:15:29.120 "data_offset": 2048, 00:15:29.120 "data_size": 63488 00:15:29.120 } 00:15:29.120 ] 00:15:29.120 }' 00:15:29.120 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.120 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.690 "name": "raid_bdev1", 00:15:29.690 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:29.690 "strip_size_kb": 64, 00:15:29.690 "state": "online", 00:15:29.690 "raid_level": "raid5f", 00:15:29.690 "superblock": true, 00:15:29.690 "num_base_bdevs": 3, 00:15:29.690 
"num_base_bdevs_discovered": 2, 00:15:29.690 "num_base_bdevs_operational": 2, 00:15:29.690 "base_bdevs_list": [ 00:15:29.690 { 00:15:29.690 "name": null, 00:15:29.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.690 "is_configured": false, 00:15:29.690 "data_offset": 0, 00:15:29.690 "data_size": 63488 00:15:29.690 }, 00:15:29.690 { 00:15:29.690 "name": "BaseBdev2", 00:15:29.690 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:29.690 "is_configured": true, 00:15:29.690 "data_offset": 2048, 00:15:29.690 "data_size": 63488 00:15:29.690 }, 00:15:29.690 { 00:15:29.690 "name": "BaseBdev3", 00:15:29.690 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:29.690 "is_configured": true, 00:15:29.690 "data_offset": 2048, 00:15:29.690 "data_size": 63488 00:15:29.690 } 00:15:29.690 ] 00:15:29.690 }' 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.690 [2024-09-28 08:53:07.529625] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.690 [2024-09-28 08:53:07.544081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:29.690 08:53:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.690 08:53:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:29.690 [2024-09-28 08:53:07.551699] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:30.630 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.630 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.630 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.630 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.630 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.630 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.630 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.630 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.630 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.630 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.630 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.630 "name": "raid_bdev1", 00:15:30.630 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:30.630 "strip_size_kb": 64, 00:15:30.630 "state": "online", 00:15:30.630 "raid_level": "raid5f", 00:15:30.630 "superblock": true, 00:15:30.630 "num_base_bdevs": 3, 00:15:30.630 "num_base_bdevs_discovered": 3, 00:15:30.630 "num_base_bdevs_operational": 3, 00:15:30.630 "process": { 00:15:30.630 "type": "rebuild", 00:15:30.630 "target": "spare", 00:15:30.630 "progress": { 00:15:30.630 "blocks": 20480, 00:15:30.630 "percent": 16 00:15:30.630 } 
00:15:30.630 }, 00:15:30.630 "base_bdevs_list": [ 00:15:30.630 { 00:15:30.630 "name": "spare", 00:15:30.630 "uuid": "953b822e-8da7-5b2a-b356-e1e5d8599fe2", 00:15:30.630 "is_configured": true, 00:15:30.630 "data_offset": 2048, 00:15:30.630 "data_size": 63488 00:15:30.630 }, 00:15:30.630 { 00:15:30.630 "name": "BaseBdev2", 00:15:30.630 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:30.630 "is_configured": true, 00:15:30.630 "data_offset": 2048, 00:15:30.630 "data_size": 63488 00:15:30.630 }, 00:15:30.630 { 00:15:30.630 "name": "BaseBdev3", 00:15:30.630 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:30.630 "is_configured": true, 00:15:30.630 "data_offset": 2048, 00:15:30.630 "data_size": 63488 00:15:30.630 } 00:15:30.630 ] 00:15:30.630 }' 00:15:30.630 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:30.890 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=568 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.890 08:53:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.890 "name": "raid_bdev1", 00:15:30.890 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:30.890 "strip_size_kb": 64, 00:15:30.890 "state": "online", 00:15:30.890 "raid_level": "raid5f", 00:15:30.890 "superblock": true, 00:15:30.890 "num_base_bdevs": 3, 00:15:30.890 "num_base_bdevs_discovered": 3, 00:15:30.890 "num_base_bdevs_operational": 3, 00:15:30.890 "process": { 00:15:30.890 "type": "rebuild", 00:15:30.890 "target": "spare", 00:15:30.890 "progress": { 00:15:30.890 "blocks": 22528, 00:15:30.890 "percent": 17 00:15:30.890 } 00:15:30.890 }, 00:15:30.890 "base_bdevs_list": [ 00:15:30.890 { 00:15:30.890 "name": "spare", 00:15:30.890 "uuid": "953b822e-8da7-5b2a-b356-e1e5d8599fe2", 00:15:30.890 "is_configured": true, 00:15:30.890 "data_offset": 2048, 00:15:30.890 
"data_size": 63488 00:15:30.890 }, 00:15:30.890 { 00:15:30.890 "name": "BaseBdev2", 00:15:30.890 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:30.890 "is_configured": true, 00:15:30.890 "data_offset": 2048, 00:15:30.890 "data_size": 63488 00:15:30.890 }, 00:15:30.890 { 00:15:30.890 "name": "BaseBdev3", 00:15:30.890 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:30.890 "is_configured": true, 00:15:30.890 "data_offset": 2048, 00:15:30.890 "data_size": 63488 00:15:30.890 } 00:15:30.890 ] 00:15:30.890 }' 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.890 08:53:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:32.270 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.270 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.270 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.270 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.270 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.270 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.270 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.270 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:32.270 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.270 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.270 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.270 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.270 "name": "raid_bdev1", 00:15:32.270 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:32.270 "strip_size_kb": 64, 00:15:32.270 "state": "online", 00:15:32.270 "raid_level": "raid5f", 00:15:32.270 "superblock": true, 00:15:32.270 "num_base_bdevs": 3, 00:15:32.270 "num_base_bdevs_discovered": 3, 00:15:32.270 "num_base_bdevs_operational": 3, 00:15:32.270 "process": { 00:15:32.270 "type": "rebuild", 00:15:32.270 "target": "spare", 00:15:32.270 "progress": { 00:15:32.270 "blocks": 47104, 00:15:32.271 "percent": 37 00:15:32.271 } 00:15:32.271 }, 00:15:32.271 "base_bdevs_list": [ 00:15:32.271 { 00:15:32.271 "name": "spare", 00:15:32.271 "uuid": "953b822e-8da7-5b2a-b356-e1e5d8599fe2", 00:15:32.271 "is_configured": true, 00:15:32.271 "data_offset": 2048, 00:15:32.271 "data_size": 63488 00:15:32.271 }, 00:15:32.271 { 00:15:32.271 "name": "BaseBdev2", 00:15:32.271 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:32.271 "is_configured": true, 00:15:32.271 "data_offset": 2048, 00:15:32.271 "data_size": 63488 00:15:32.271 }, 00:15:32.271 { 00:15:32.271 "name": "BaseBdev3", 00:15:32.271 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:32.271 "is_configured": true, 00:15:32.271 "data_offset": 2048, 00:15:32.271 "data_size": 63488 00:15:32.271 } 00:15:32.271 ] 00:15:32.271 }' 00:15:32.271 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.271 08:53:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.271 08:53:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.271 08:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.271 08:53:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.210 "name": "raid_bdev1", 00:15:33.210 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:33.210 "strip_size_kb": 64, 00:15:33.210 "state": "online", 00:15:33.210 "raid_level": "raid5f", 00:15:33.210 "superblock": true, 00:15:33.210 "num_base_bdevs": 3, 00:15:33.210 "num_base_bdevs_discovered": 3, 00:15:33.210 "num_base_bdevs_operational": 
3, 00:15:33.210 "process": { 00:15:33.210 "type": "rebuild", 00:15:33.210 "target": "spare", 00:15:33.210 "progress": { 00:15:33.210 "blocks": 69632, 00:15:33.210 "percent": 54 00:15:33.210 } 00:15:33.210 }, 00:15:33.210 "base_bdevs_list": [ 00:15:33.210 { 00:15:33.210 "name": "spare", 00:15:33.210 "uuid": "953b822e-8da7-5b2a-b356-e1e5d8599fe2", 00:15:33.210 "is_configured": true, 00:15:33.210 "data_offset": 2048, 00:15:33.210 "data_size": 63488 00:15:33.210 }, 00:15:33.210 { 00:15:33.210 "name": "BaseBdev2", 00:15:33.210 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:33.210 "is_configured": true, 00:15:33.210 "data_offset": 2048, 00:15:33.210 "data_size": 63488 00:15:33.210 }, 00:15:33.210 { 00:15:33.210 "name": "BaseBdev3", 00:15:33.210 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:33.210 "is_configured": true, 00:15:33.210 "data_offset": 2048, 00:15:33.210 "data_size": 63488 00:15:33.210 } 00:15:33.210 ] 00:15:33.210 }' 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.210 08:53:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:34.592 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:34.592 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.592 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.592 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.592 
08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.592 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.592 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.592 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.592 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.592 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.592 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.592 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.592 "name": "raid_bdev1", 00:15:34.592 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:34.592 "strip_size_kb": 64, 00:15:34.592 "state": "online", 00:15:34.592 "raid_level": "raid5f", 00:15:34.592 "superblock": true, 00:15:34.592 "num_base_bdevs": 3, 00:15:34.592 "num_base_bdevs_discovered": 3, 00:15:34.592 "num_base_bdevs_operational": 3, 00:15:34.592 "process": { 00:15:34.592 "type": "rebuild", 00:15:34.592 "target": "spare", 00:15:34.592 "progress": { 00:15:34.592 "blocks": 92160, 00:15:34.592 "percent": 72 00:15:34.592 } 00:15:34.592 }, 00:15:34.592 "base_bdevs_list": [ 00:15:34.592 { 00:15:34.592 "name": "spare", 00:15:34.592 "uuid": "953b822e-8da7-5b2a-b356-e1e5d8599fe2", 00:15:34.592 "is_configured": true, 00:15:34.592 "data_offset": 2048, 00:15:34.592 "data_size": 63488 00:15:34.592 }, 00:15:34.592 { 00:15:34.592 "name": "BaseBdev2", 00:15:34.592 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:34.592 "is_configured": true, 00:15:34.592 "data_offset": 2048, 00:15:34.592 "data_size": 63488 00:15:34.592 }, 00:15:34.592 { 00:15:34.592 "name": "BaseBdev3", 00:15:34.592 "uuid": 
"7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:34.592 "is_configured": true, 00:15:34.592 "data_offset": 2048, 00:15:34.592 "data_size": 63488 00:15:34.592 } 00:15:34.592 ] 00:15:34.592 }' 00:15:34.592 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.592 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.592 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.592 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.592 08:53:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:35.531 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:35.531 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.531 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.531 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.531 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.531 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.531 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.531 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.531 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.531 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.531 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.531 
08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.531 "name": "raid_bdev1", 00:15:35.531 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:35.531 "strip_size_kb": 64, 00:15:35.531 "state": "online", 00:15:35.531 "raid_level": "raid5f", 00:15:35.531 "superblock": true, 00:15:35.531 "num_base_bdevs": 3, 00:15:35.531 "num_base_bdevs_discovered": 3, 00:15:35.531 "num_base_bdevs_operational": 3, 00:15:35.531 "process": { 00:15:35.531 "type": "rebuild", 00:15:35.531 "target": "spare", 00:15:35.531 "progress": { 00:15:35.531 "blocks": 116736, 00:15:35.531 "percent": 91 00:15:35.531 } 00:15:35.531 }, 00:15:35.531 "base_bdevs_list": [ 00:15:35.531 { 00:15:35.531 "name": "spare", 00:15:35.531 "uuid": "953b822e-8da7-5b2a-b356-e1e5d8599fe2", 00:15:35.531 "is_configured": true, 00:15:35.531 "data_offset": 2048, 00:15:35.531 "data_size": 63488 00:15:35.531 }, 00:15:35.531 { 00:15:35.531 "name": "BaseBdev2", 00:15:35.531 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:35.531 "is_configured": true, 00:15:35.531 "data_offset": 2048, 00:15:35.531 "data_size": 63488 00:15:35.531 }, 00:15:35.531 { 00:15:35.531 "name": "BaseBdev3", 00:15:35.531 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:35.531 "is_configured": true, 00:15:35.531 "data_offset": 2048, 00:15:35.531 "data_size": 63488 00:15:35.531 } 00:15:35.531 ] 00:15:35.531 }' 00:15:35.531 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.531 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.531 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.531 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.531 08:53:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:36.105 [2024-09-28 08:53:13.793187] 
bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:36.105 [2024-09-28 08:53:13.793319] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:36.105 [2024-09-28 08:53:13.793438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.675 "name": "raid_bdev1", 00:15:36.675 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:36.675 "strip_size_kb": 64, 00:15:36.675 "state": "online", 00:15:36.675 "raid_level": "raid5f", 00:15:36.675 "superblock": true, 00:15:36.675 "num_base_bdevs": 3, 00:15:36.675 "num_base_bdevs_discovered": 3, 
00:15:36.675 "num_base_bdevs_operational": 3, 00:15:36.675 "base_bdevs_list": [ 00:15:36.675 { 00:15:36.675 "name": "spare", 00:15:36.675 "uuid": "953b822e-8da7-5b2a-b356-e1e5d8599fe2", 00:15:36.675 "is_configured": true, 00:15:36.675 "data_offset": 2048, 00:15:36.675 "data_size": 63488 00:15:36.675 }, 00:15:36.675 { 00:15:36.675 "name": "BaseBdev2", 00:15:36.675 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:36.675 "is_configured": true, 00:15:36.675 "data_offset": 2048, 00:15:36.675 "data_size": 63488 00:15:36.675 }, 00:15:36.675 { 00:15:36.675 "name": "BaseBdev3", 00:15:36.675 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:36.675 "is_configured": true, 00:15:36.675 "data_offset": 2048, 00:15:36.675 "data_size": 63488 00:15:36.675 } 00:15:36.675 ] 00:15:36.675 }' 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.675 "name": "raid_bdev1", 00:15:36.675 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:36.675 "strip_size_kb": 64, 00:15:36.675 "state": "online", 00:15:36.675 "raid_level": "raid5f", 00:15:36.675 "superblock": true, 00:15:36.675 "num_base_bdevs": 3, 00:15:36.675 "num_base_bdevs_discovered": 3, 00:15:36.675 "num_base_bdevs_operational": 3, 00:15:36.675 "base_bdevs_list": [ 00:15:36.675 { 00:15:36.675 "name": "spare", 00:15:36.675 "uuid": "953b822e-8da7-5b2a-b356-e1e5d8599fe2", 00:15:36.675 "is_configured": true, 00:15:36.675 "data_offset": 2048, 00:15:36.675 "data_size": 63488 00:15:36.675 }, 00:15:36.675 { 00:15:36.675 "name": "BaseBdev2", 00:15:36.675 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:36.675 "is_configured": true, 00:15:36.675 "data_offset": 2048, 00:15:36.675 "data_size": 63488 00:15:36.675 }, 00:15:36.675 { 00:15:36.675 "name": "BaseBdev3", 00:15:36.675 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:36.675 "is_configured": true, 00:15:36.675 "data_offset": 2048, 00:15:36.675 "data_size": 63488 00:15:36.675 } 00:15:36.675 ] 00:15:36.675 }' 00:15:36.675 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.935 "name": "raid_bdev1", 00:15:36.935 "uuid": 
"c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:36.935 "strip_size_kb": 64, 00:15:36.935 "state": "online", 00:15:36.935 "raid_level": "raid5f", 00:15:36.935 "superblock": true, 00:15:36.935 "num_base_bdevs": 3, 00:15:36.935 "num_base_bdevs_discovered": 3, 00:15:36.935 "num_base_bdevs_operational": 3, 00:15:36.935 "base_bdevs_list": [ 00:15:36.935 { 00:15:36.935 "name": "spare", 00:15:36.935 "uuid": "953b822e-8da7-5b2a-b356-e1e5d8599fe2", 00:15:36.935 "is_configured": true, 00:15:36.935 "data_offset": 2048, 00:15:36.935 "data_size": 63488 00:15:36.935 }, 00:15:36.935 { 00:15:36.935 "name": "BaseBdev2", 00:15:36.935 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:36.935 "is_configured": true, 00:15:36.935 "data_offset": 2048, 00:15:36.935 "data_size": 63488 00:15:36.935 }, 00:15:36.935 { 00:15:36.935 "name": "BaseBdev3", 00:15:36.935 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:36.935 "is_configured": true, 00:15:36.935 "data_offset": 2048, 00:15:36.935 "data_size": 63488 00:15:36.935 } 00:15:36.935 ] 00:15:36.935 }' 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.935 08:53:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.194 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:37.194 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.194 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.194 [2024-09-28 08:53:15.183254] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.195 [2024-09-28 08:53:15.183352] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.195 [2024-09-28 08:53:15.183449] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.195 [2024-09-28 08:53:15.183531] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.195 [2024-09-28 08:53:15.183548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:37.195 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:37.454 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:37.454 /dev/nbd0 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.716 1+0 records in 00:15:37.716 1+0 records out 00:15:37.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374403 s, 10.9 MB/s 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.716 08:53:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.716 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:37.717 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:37.717 /dev/nbd1 00:15:37.717 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:37.717 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:37.717 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:37.717 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:37.717 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:37.717 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:37.717 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:37.717 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:37.717 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:37.717 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:37.717 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.717 1+0 records in 00:15:37.717 1+0 records out 00:15:37.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392897 s, 10.4 MB/s 00:15:37.717 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.717 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:37.717 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.977 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:37.977 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:37.977 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.977 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:37.977 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:37.977 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:37.977 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.977 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:37.977 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:37.977 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:37.977 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.977 08:53:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:15:38.238 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:38.238 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.238 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.238 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.238 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.238 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.238 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:38.238 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.238 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.238 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.498 [2024-09-28 08:53:16.331028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:38.498 [2024-09-28 08:53:16.331108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.498 [2024-09-28 08:53:16.331131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:38.498 [2024-09-28 08:53:16.331142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.498 [2024-09-28 08:53:16.333528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.498 spare 00:15:38.498 [2024-09-28 08:53:16.333615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:38.498 [2024-09-28 08:53:16.333728] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:38.498 [2024-09-28 08:53:16.333799] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:38.498 [2024-09-28 08:53:16.333989] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.498 [2024-09-28 08:53:16.334102] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.498 [2024-09-28 08:53:16.433992] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:38.498 [2024-09-28 08:53:16.434020] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:38.498 [2024-09-28 08:53:16.434282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:38.498 [2024-09-28 08:53:16.439663] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:38.498 [2024-09-28 08:53:16.439684] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:38.498 [2024-09-28 08:53:16.439858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.498 08:53:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.498 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.763 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.763 "name": "raid_bdev1", 00:15:38.763 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:38.763 "strip_size_kb": 64, 00:15:38.763 "state": "online", 00:15:38.763 "raid_level": "raid5f", 00:15:38.763 "superblock": true, 00:15:38.763 "num_base_bdevs": 3, 00:15:38.763 "num_base_bdevs_discovered": 3, 00:15:38.763 "num_base_bdevs_operational": 3, 00:15:38.763 "base_bdevs_list": [ 00:15:38.763 { 00:15:38.763 "name": "spare", 00:15:38.763 "uuid": "953b822e-8da7-5b2a-b356-e1e5d8599fe2", 00:15:38.763 "is_configured": true, 00:15:38.763 "data_offset": 2048, 00:15:38.763 "data_size": 63488 00:15:38.763 }, 00:15:38.763 { 00:15:38.763 "name": "BaseBdev2", 00:15:38.763 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:38.763 "is_configured": true, 00:15:38.763 "data_offset": 2048, 00:15:38.763 
"data_size": 63488 00:15:38.763 }, 00:15:38.763 { 00:15:38.763 "name": "BaseBdev3", 00:15:38.763 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:38.763 "is_configured": true, 00:15:38.763 "data_offset": 2048, 00:15:38.763 "data_size": 63488 00:15:38.763 } 00:15:38.763 ] 00:15:38.763 }' 00:15:38.763 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.763 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.062 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.062 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.062 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.062 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.062 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.062 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.062 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.062 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.062 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.062 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.062 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.062 "name": "raid_bdev1", 00:15:39.062 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:39.062 "strip_size_kb": 64, 00:15:39.062 "state": "online", 00:15:39.062 "raid_level": "raid5f", 00:15:39.062 "superblock": true, 00:15:39.062 "num_base_bdevs": 3, 00:15:39.062 
"num_base_bdevs_discovered": 3, 00:15:39.062 "num_base_bdevs_operational": 3, 00:15:39.062 "base_bdevs_list": [ 00:15:39.062 { 00:15:39.062 "name": "spare", 00:15:39.062 "uuid": "953b822e-8da7-5b2a-b356-e1e5d8599fe2", 00:15:39.062 "is_configured": true, 00:15:39.062 "data_offset": 2048, 00:15:39.062 "data_size": 63488 00:15:39.062 }, 00:15:39.062 { 00:15:39.062 "name": "BaseBdev2", 00:15:39.062 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:39.062 "is_configured": true, 00:15:39.062 "data_offset": 2048, 00:15:39.062 "data_size": 63488 00:15:39.062 }, 00:15:39.062 { 00:15:39.062 "name": "BaseBdev3", 00:15:39.062 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:39.062 "is_configured": true, 00:15:39.062 "data_offset": 2048, 00:15:39.062 "data_size": 63488 00:15:39.062 } 00:15:39.062 ] 00:15:39.062 }' 00:15:39.062 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.062 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.062 08:53:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.353 [2024-09-28 08:53:17.089612] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.353 08:53:17 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.353 "name": "raid_bdev1", 00:15:39.353 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:39.353 "strip_size_kb": 64, 00:15:39.353 "state": "online", 00:15:39.353 "raid_level": "raid5f", 00:15:39.353 "superblock": true, 00:15:39.353 "num_base_bdevs": 3, 00:15:39.353 "num_base_bdevs_discovered": 2, 00:15:39.353 "num_base_bdevs_operational": 2, 00:15:39.353 "base_bdevs_list": [ 00:15:39.353 { 00:15:39.353 "name": null, 00:15:39.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.353 "is_configured": false, 00:15:39.353 "data_offset": 0, 00:15:39.353 "data_size": 63488 00:15:39.353 }, 00:15:39.353 { 00:15:39.353 "name": "BaseBdev2", 00:15:39.353 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:39.353 "is_configured": true, 00:15:39.353 "data_offset": 2048, 00:15:39.353 "data_size": 63488 00:15:39.353 }, 00:15:39.353 { 00:15:39.353 "name": "BaseBdev3", 00:15:39.353 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:39.353 "is_configured": true, 00:15:39.353 "data_offset": 2048, 00:15:39.353 "data_size": 63488 00:15:39.353 } 00:15:39.353 ] 00:15:39.353 }' 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.353 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.631 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:39.631 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.631 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.631 [2024-09-28 08:53:17.548850] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:39.631 [2024-09-28 08:53:17.549066] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:39.631 [2024-09-28 08:53:17.549126] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:39.631 [2024-09-28 08:53:17.549187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:39.631 [2024-09-28 08:53:17.563626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:39.631 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.631 08:53:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:39.631 [2024-09-28 08:53:17.571091] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:40.582 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.582 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.582 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.582 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.582 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.842 "name": "raid_bdev1", 00:15:40.842 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:40.842 "strip_size_kb": 64, 00:15:40.842 "state": "online", 00:15:40.842 "raid_level": "raid5f", 00:15:40.842 "superblock": true, 00:15:40.842 "num_base_bdevs": 3, 00:15:40.842 "num_base_bdevs_discovered": 3, 00:15:40.842 "num_base_bdevs_operational": 3, 00:15:40.842 "process": { 00:15:40.842 "type": "rebuild", 00:15:40.842 "target": "spare", 00:15:40.842 "progress": { 00:15:40.842 "blocks": 20480, 00:15:40.842 "percent": 16 00:15:40.842 } 00:15:40.842 }, 00:15:40.842 "base_bdevs_list": [ 00:15:40.842 { 00:15:40.842 "name": "spare", 00:15:40.842 "uuid": "953b822e-8da7-5b2a-b356-e1e5d8599fe2", 00:15:40.842 "is_configured": true, 00:15:40.842 "data_offset": 2048, 00:15:40.842 "data_size": 63488 00:15:40.842 }, 00:15:40.842 { 00:15:40.842 "name": "BaseBdev2", 00:15:40.842 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:40.842 "is_configured": true, 00:15:40.842 "data_offset": 2048, 00:15:40.842 "data_size": 63488 00:15:40.842 }, 00:15:40.842 { 00:15:40.842 "name": "BaseBdev3", 00:15:40.842 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:40.842 "is_configured": true, 00:15:40.842 "data_offset": 2048, 00:15:40.842 "data_size": 63488 00:15:40.842 } 00:15:40.842 ] 00:15:40.842 }' 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.842 [2024-09-28 08:53:18.726349] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:40.842 [2024-09-28 08:53:18.780192] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:40.842 [2024-09-28 08:53:18.780255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.842 [2024-09-28 08:53:18.780271] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:40.842 [2024-09-28 08:53:18.780280] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.842 08:53:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.842 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.102 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.102 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.102 "name": "raid_bdev1", 00:15:41.102 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:41.102 "strip_size_kb": 64, 00:15:41.102 "state": "online", 00:15:41.102 "raid_level": "raid5f", 00:15:41.102 "superblock": true, 00:15:41.102 "num_base_bdevs": 3, 00:15:41.102 "num_base_bdevs_discovered": 2, 00:15:41.102 "num_base_bdevs_operational": 2, 00:15:41.102 "base_bdevs_list": [ 00:15:41.102 { 00:15:41.102 "name": null, 00:15:41.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.102 "is_configured": false, 00:15:41.102 "data_offset": 0, 00:15:41.102 "data_size": 63488 00:15:41.102 }, 00:15:41.102 { 00:15:41.102 "name": "BaseBdev2", 00:15:41.102 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:41.102 "is_configured": true, 00:15:41.102 "data_offset": 2048, 00:15:41.102 "data_size": 63488 00:15:41.102 }, 00:15:41.102 { 00:15:41.102 "name": "BaseBdev3", 00:15:41.102 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:41.102 "is_configured": true, 00:15:41.102 "data_offset": 2048, 00:15:41.102 "data_size": 63488 00:15:41.102 } 00:15:41.102 ] 00:15:41.102 }' 00:15:41.102 08:53:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.102 08:53:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.362 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:41.362 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.362 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.362 [2024-09-28 08:53:19.305772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:41.362 [2024-09-28 08:53:19.305876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.362 [2024-09-28 08:53:19.305914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:41.362 [2024-09-28 08:53:19.305949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.362 [2024-09-28 08:53:19.306484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.362 [2024-09-28 08:53:19.306542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:41.362 [2024-09-28 08:53:19.306667] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:41.362 [2024-09-28 08:53:19.306703] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:41.362 [2024-09-28 08:53:19.306744] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:41.362 [2024-09-28 08:53:19.306790] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.362 [2024-09-28 08:53:19.320533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:41.362 spare 00:15:41.362 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.362 08:53:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:41.362 [2024-09-28 08:53:19.327868] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.743 "name": "raid_bdev1", 00:15:42.743 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:42.743 "strip_size_kb": 64, 00:15:42.743 "state": 
"online", 00:15:42.743 "raid_level": "raid5f", 00:15:42.743 "superblock": true, 00:15:42.743 "num_base_bdevs": 3, 00:15:42.743 "num_base_bdevs_discovered": 3, 00:15:42.743 "num_base_bdevs_operational": 3, 00:15:42.743 "process": { 00:15:42.743 "type": "rebuild", 00:15:42.743 "target": "spare", 00:15:42.743 "progress": { 00:15:42.743 "blocks": 20480, 00:15:42.743 "percent": 16 00:15:42.743 } 00:15:42.743 }, 00:15:42.743 "base_bdevs_list": [ 00:15:42.743 { 00:15:42.743 "name": "spare", 00:15:42.743 "uuid": "953b822e-8da7-5b2a-b356-e1e5d8599fe2", 00:15:42.743 "is_configured": true, 00:15:42.743 "data_offset": 2048, 00:15:42.743 "data_size": 63488 00:15:42.743 }, 00:15:42.743 { 00:15:42.743 "name": "BaseBdev2", 00:15:42.743 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:42.743 "is_configured": true, 00:15:42.743 "data_offset": 2048, 00:15:42.743 "data_size": 63488 00:15:42.743 }, 00:15:42.743 { 00:15:42.743 "name": "BaseBdev3", 00:15:42.743 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:42.743 "is_configured": true, 00:15:42.743 "data_offset": 2048, 00:15:42.743 "data_size": 63488 00:15:42.743 } 00:15:42.743 ] 00:15:42.743 }' 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.743 [2024-09-28 08:53:20.486983] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.743 [2024-09-28 08:53:20.536822] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:42.743 [2024-09-28 08:53:20.536876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.743 [2024-09-28 08:53:20.536906] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.743 [2024-09-28 08:53:20.536930] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.743 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.743 "name": "raid_bdev1", 00:15:42.743 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:42.743 "strip_size_kb": 64, 00:15:42.743 "state": "online", 00:15:42.743 "raid_level": "raid5f", 00:15:42.743 "superblock": true, 00:15:42.744 "num_base_bdevs": 3, 00:15:42.744 "num_base_bdevs_discovered": 2, 00:15:42.744 "num_base_bdevs_operational": 2, 00:15:42.744 "base_bdevs_list": [ 00:15:42.744 { 00:15:42.744 "name": null, 00:15:42.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.744 "is_configured": false, 00:15:42.744 "data_offset": 0, 00:15:42.744 "data_size": 63488 00:15:42.744 }, 00:15:42.744 { 00:15:42.744 "name": "BaseBdev2", 00:15:42.744 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:42.744 "is_configured": true, 00:15:42.744 "data_offset": 2048, 00:15:42.744 "data_size": 63488 00:15:42.744 }, 00:15:42.744 { 00:15:42.744 "name": "BaseBdev3", 00:15:42.744 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:42.744 "is_configured": true, 00:15:42.744 "data_offset": 2048, 00:15:42.744 "data_size": 63488 00:15:42.744 } 00:15:42.744 ] 00:15:42.744 }' 00:15:42.744 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.744 08:53:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.313 "name": "raid_bdev1", 00:15:43.313 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:43.313 "strip_size_kb": 64, 00:15:43.313 "state": "online", 00:15:43.313 "raid_level": "raid5f", 00:15:43.313 "superblock": true, 00:15:43.313 "num_base_bdevs": 3, 00:15:43.313 "num_base_bdevs_discovered": 2, 00:15:43.313 "num_base_bdevs_operational": 2, 00:15:43.313 "base_bdevs_list": [ 00:15:43.313 { 00:15:43.313 "name": null, 00:15:43.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.313 "is_configured": false, 00:15:43.313 "data_offset": 0, 00:15:43.313 "data_size": 63488 00:15:43.313 }, 00:15:43.313 { 00:15:43.313 "name": "BaseBdev2", 00:15:43.313 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:43.313 "is_configured": true, 00:15:43.313 "data_offset": 2048, 00:15:43.313 "data_size": 63488 00:15:43.313 }, 00:15:43.313 { 00:15:43.313 "name": "BaseBdev3", 00:15:43.313 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:43.313 "is_configured": true, 
00:15:43.313 "data_offset": 2048, 00:15:43.313 "data_size": 63488 00:15:43.313 } 00:15:43.313 ] 00:15:43.313 }' 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.313 [2024-09-28 08:53:21.134356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:43.313 [2024-09-28 08:53:21.134450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.313 [2024-09-28 08:53:21.134479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:43.313 [2024-09-28 08:53:21.134489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.313 [2024-09-28 08:53:21.135003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.313 [2024-09-28 
08:53:21.135022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:43.313 [2024-09-28 08:53:21.135109] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:43.313 [2024-09-28 08:53:21.135123] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:43.313 [2024-09-28 08:53:21.135136] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:43.313 [2024-09-28 08:53:21.135151] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:43.313 BaseBdev1 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.313 08:53:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.253 08:53:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.253 "name": "raid_bdev1", 00:15:44.253 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:44.253 "strip_size_kb": 64, 00:15:44.253 "state": "online", 00:15:44.253 "raid_level": "raid5f", 00:15:44.253 "superblock": true, 00:15:44.253 "num_base_bdevs": 3, 00:15:44.253 "num_base_bdevs_discovered": 2, 00:15:44.253 "num_base_bdevs_operational": 2, 00:15:44.253 "base_bdevs_list": [ 00:15:44.253 { 00:15:44.253 "name": null, 00:15:44.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.253 "is_configured": false, 00:15:44.253 "data_offset": 0, 00:15:44.253 "data_size": 63488 00:15:44.253 }, 00:15:44.253 { 00:15:44.253 "name": "BaseBdev2", 00:15:44.253 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:44.253 "is_configured": true, 00:15:44.253 "data_offset": 2048, 00:15:44.253 "data_size": 63488 00:15:44.253 }, 00:15:44.253 { 00:15:44.253 "name": "BaseBdev3", 00:15:44.253 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:44.253 "is_configured": true, 00:15:44.253 "data_offset": 2048, 00:15:44.253 "data_size": 63488 00:15:44.253 } 00:15:44.253 ] 00:15:44.253 }' 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.253 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.824 "name": "raid_bdev1", 00:15:44.824 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:44.824 "strip_size_kb": 64, 00:15:44.824 "state": "online", 00:15:44.824 "raid_level": "raid5f", 00:15:44.824 "superblock": true, 00:15:44.824 "num_base_bdevs": 3, 00:15:44.824 "num_base_bdevs_discovered": 2, 00:15:44.824 "num_base_bdevs_operational": 2, 00:15:44.824 "base_bdevs_list": [ 00:15:44.824 { 00:15:44.824 "name": null, 00:15:44.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.824 "is_configured": false, 00:15:44.824 "data_offset": 0, 00:15:44.824 "data_size": 63488 00:15:44.824 }, 00:15:44.824 { 00:15:44.824 "name": "BaseBdev2", 00:15:44.824 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 
00:15:44.824 "is_configured": true, 00:15:44.824 "data_offset": 2048, 00:15:44.824 "data_size": 63488 00:15:44.824 }, 00:15:44.824 { 00:15:44.824 "name": "BaseBdev3", 00:15:44.824 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:44.824 "is_configured": true, 00:15:44.824 "data_offset": 2048, 00:15:44.824 "data_size": 63488 00:15:44.824 } 00:15:44.824 ] 00:15:44.824 }' 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.824 08:53:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.824 [2024-09-28 08:53:22.731631] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.824 [2024-09-28 08:53:22.731782] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:44.824 [2024-09-28 08:53:22.731798] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:44.824 request: 00:15:44.824 { 00:15:44.824 "base_bdev": "BaseBdev1", 00:15:44.824 "raid_bdev": "raid_bdev1", 00:15:44.824 "method": "bdev_raid_add_base_bdev", 00:15:44.824 "req_id": 1 00:15:44.824 } 00:15:44.824 Got JSON-RPC error response 00:15:44.824 response: 00:15:44.824 { 00:15:44.824 "code": -22, 00:15:44.824 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:44.824 } 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:44.824 08:53:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:45.762 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:45.762 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.762 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.762 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.762 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.762 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.762 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.762 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.762 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.762 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.762 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.762 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.762 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.762 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.021 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.021 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.021 "name": "raid_bdev1", 00:15:46.021 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:46.021 "strip_size_kb": 64, 00:15:46.021 "state": "online", 00:15:46.021 "raid_level": "raid5f", 00:15:46.021 "superblock": true, 00:15:46.021 "num_base_bdevs": 3, 00:15:46.021 "num_base_bdevs_discovered": 2, 00:15:46.021 "num_base_bdevs_operational": 2, 00:15:46.021 "base_bdevs_list": [ 00:15:46.021 { 00:15:46.021 "name": null, 00:15:46.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.021 "is_configured": false, 00:15:46.021 "data_offset": 0, 00:15:46.021 "data_size": 63488 00:15:46.021 }, 00:15:46.021 { 00:15:46.021 
"name": "BaseBdev2", 00:15:46.021 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:46.021 "is_configured": true, 00:15:46.021 "data_offset": 2048, 00:15:46.021 "data_size": 63488 00:15:46.021 }, 00:15:46.021 { 00:15:46.021 "name": "BaseBdev3", 00:15:46.021 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:46.021 "is_configured": true, 00:15:46.021 "data_offset": 2048, 00:15:46.021 "data_size": 63488 00:15:46.021 } 00:15:46.021 ] 00:15:46.021 }' 00:15:46.021 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.021 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.280 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.280 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.280 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.280 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.280 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.280 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.280 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.280 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.280 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.280 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.280 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.280 "name": "raid_bdev1", 00:15:46.280 "uuid": "c82c51ec-58d4-45b9-ba2a-f4ec2d456027", 00:15:46.280 
"strip_size_kb": 64, 00:15:46.280 "state": "online", 00:15:46.280 "raid_level": "raid5f", 00:15:46.280 "superblock": true, 00:15:46.280 "num_base_bdevs": 3, 00:15:46.280 "num_base_bdevs_discovered": 2, 00:15:46.280 "num_base_bdevs_operational": 2, 00:15:46.280 "base_bdevs_list": [ 00:15:46.280 { 00:15:46.280 "name": null, 00:15:46.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.280 "is_configured": false, 00:15:46.280 "data_offset": 0, 00:15:46.280 "data_size": 63488 00:15:46.280 }, 00:15:46.280 { 00:15:46.281 "name": "BaseBdev2", 00:15:46.281 "uuid": "f77dcc24-20b9-5291-bf38-b34d51f2e86a", 00:15:46.281 "is_configured": true, 00:15:46.281 "data_offset": 2048, 00:15:46.281 "data_size": 63488 00:15:46.281 }, 00:15:46.281 { 00:15:46.281 "name": "BaseBdev3", 00:15:46.281 "uuid": "7b6d9a56-ba88-5c8a-8a5c-9609a6f6d76e", 00:15:46.281 "is_configured": true, 00:15:46.281 "data_offset": 2048, 00:15:46.281 "data_size": 63488 00:15:46.281 } 00:15:46.281 ] 00:15:46.281 }' 00:15:46.281 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.539 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.539 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.539 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.539 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82007 00:15:46.539 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82007 ']' 00:15:46.539 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 82007 00:15:46.539 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:46.539 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:46.539 08:53:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82007 00:15:46.539 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:46.539 killing process with pid 82007 00:15:46.539 Received shutdown signal, test time was about 60.000000 seconds 00:15:46.539 00:15:46.539 Latency(us) 00:15:46.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.539 =================================================================================================================== 00:15:46.539 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:46.539 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:46.539 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82007' 00:15:46.539 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 82007 00:15:46.539 [2024-09-28 08:53:24.390964] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.539 [2024-09-28 08:53:24.391080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.539 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 82007 00:15:46.539 [2024-09-28 08:53:24.391139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.539 [2024-09-28 08:53:24.391151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:47.107 [2024-09-28 08:53:24.799398] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.490 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:48.490 ************************************ 00:15:48.490 END TEST raid5f_rebuild_test_sb 00:15:48.490 ************************************ 
00:15:48.490 00:15:48.490 real 0m23.749s 00:15:48.490 user 0m30.075s 00:15:48.490 sys 0m3.279s 00:15:48.490 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:48.490 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.490 08:53:26 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:48.490 08:53:26 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:48.490 08:53:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:48.490 08:53:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:48.490 08:53:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:48.490 ************************************ 00:15:48.490 START TEST raid5f_state_function_test 00:15:48.490 ************************************ 00:15:48.490 08:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:15:48.490 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:48.490 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:48.490 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:48.490 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:48.490 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:48.490 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:48.491 08:53:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82775 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82775' 00:15:48.491 Process raid pid: 82775 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82775 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82775 ']' 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:48.491 08:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.491 [2024-09-28 08:53:26.292478] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:15:48.491 [2024-09-28 08:53:26.293220] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.491 [2024-09-28 08:53:26.463823] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.751 [2024-09-28 08:53:26.703953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.010 [2024-09-28 08:53:26.938000] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.010 [2024-09-28 08:53:26.938108] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.270 [2024-09-28 08:53:27.107924] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.270 [2024-09-28 08:53:27.108017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.270 [2024-09-28 08:53:27.108046] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.270 [2024-09-28 08:53:27.108069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.270 [2024-09-28 08:53:27.108087] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:49.270 [2024-09-28 08:53:27.108110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:49.270 [2024-09-28 08:53:27.108118] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:49.270 [2024-09-28 08:53:27.108129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.270 08:53:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.270 "name": "Existed_Raid", 00:15:49.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.270 "strip_size_kb": 64, 00:15:49.270 "state": "configuring", 00:15:49.270 "raid_level": "raid5f", 00:15:49.270 "superblock": false, 00:15:49.270 "num_base_bdevs": 4, 00:15:49.270 "num_base_bdevs_discovered": 0, 00:15:49.270 "num_base_bdevs_operational": 4, 00:15:49.270 "base_bdevs_list": [ 00:15:49.270 { 00:15:49.270 "name": "BaseBdev1", 00:15:49.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.270 "is_configured": false, 00:15:49.270 "data_offset": 0, 00:15:49.270 "data_size": 0 00:15:49.270 }, 00:15:49.270 { 00:15:49.270 "name": "BaseBdev2", 00:15:49.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.270 "is_configured": false, 00:15:49.270 "data_offset": 0, 00:15:49.270 "data_size": 0 00:15:49.270 }, 00:15:49.270 { 00:15:49.270 "name": "BaseBdev3", 00:15:49.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.270 "is_configured": false, 00:15:49.270 "data_offset": 0, 00:15:49.270 "data_size": 0 00:15:49.270 }, 00:15:49.270 { 00:15:49.270 "name": "BaseBdev4", 00:15:49.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.270 "is_configured": false, 00:15:49.270 "data_offset": 0, 00:15:49.270 "data_size": 0 00:15:49.270 } 00:15:49.270 ] 00:15:49.270 }' 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.270 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.839 [2024-09-28 08:53:27.547082] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:49.839 [2024-09-28 08:53:27.547121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.839 [2024-09-28 08:53:27.559093] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.839 [2024-09-28 08:53:27.559134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.839 [2024-09-28 08:53:27.559142] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.839 [2024-09-28 08:53:27.559151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.839 [2024-09-28 08:53:27.559156] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:49.839 [2024-09-28 08:53:27.559165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:49.839 [2024-09-28 08:53:27.559170] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:49.839 [2024-09-28 08:53:27.559178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.839 [2024-09-28 08:53:27.647392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.839 BaseBdev1 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.839 
08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.839 [ 00:15:49.839 { 00:15:49.839 "name": "BaseBdev1", 00:15:49.839 "aliases": [ 00:15:49.839 "6908f815-54f5-4450-8f51-0469d77cce9e" 00:15:49.839 ], 00:15:49.839 "product_name": "Malloc disk", 00:15:49.839 "block_size": 512, 00:15:49.839 "num_blocks": 65536, 00:15:49.839 "uuid": "6908f815-54f5-4450-8f51-0469d77cce9e", 00:15:49.839 "assigned_rate_limits": { 00:15:49.839 "rw_ios_per_sec": 0, 00:15:49.839 "rw_mbytes_per_sec": 0, 00:15:49.839 "r_mbytes_per_sec": 0, 00:15:49.839 "w_mbytes_per_sec": 0 00:15:49.839 }, 00:15:49.839 "claimed": true, 00:15:49.839 "claim_type": "exclusive_write", 00:15:49.839 "zoned": false, 00:15:49.839 "supported_io_types": { 00:15:49.839 "read": true, 00:15:49.839 "write": true, 00:15:49.839 "unmap": true, 00:15:49.839 "flush": true, 00:15:49.839 "reset": true, 00:15:49.839 "nvme_admin": false, 00:15:49.839 "nvme_io": false, 00:15:49.839 "nvme_io_md": false, 00:15:49.839 "write_zeroes": true, 00:15:49.839 "zcopy": true, 00:15:49.839 "get_zone_info": false, 00:15:49.839 "zone_management": false, 00:15:49.839 "zone_append": false, 00:15:49.839 "compare": false, 00:15:49.839 "compare_and_write": false, 00:15:49.839 "abort": true, 00:15:49.839 "seek_hole": false, 00:15:49.839 "seek_data": false, 00:15:49.839 "copy": true, 00:15:49.839 "nvme_iov_md": false 00:15:49.839 }, 00:15:49.839 "memory_domains": [ 00:15:49.839 { 00:15:49.839 "dma_device_id": "system", 00:15:49.839 "dma_device_type": 1 00:15:49.839 }, 00:15:49.839 { 00:15:49.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.839 "dma_device_type": 2 00:15:49.839 } 00:15:49.839 ], 00:15:49.839 "driver_specific": {} 00:15:49.839 } 
00:15:49.839 ] 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.839 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.840 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.840 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.840 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:49.840 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.840 "name": "Existed_Raid", 00:15:49.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.840 "strip_size_kb": 64, 00:15:49.840 "state": "configuring", 00:15:49.840 "raid_level": "raid5f", 00:15:49.840 "superblock": false, 00:15:49.840 "num_base_bdevs": 4, 00:15:49.840 "num_base_bdevs_discovered": 1, 00:15:49.840 "num_base_bdevs_operational": 4, 00:15:49.840 "base_bdevs_list": [ 00:15:49.840 { 00:15:49.840 "name": "BaseBdev1", 00:15:49.840 "uuid": "6908f815-54f5-4450-8f51-0469d77cce9e", 00:15:49.840 "is_configured": true, 00:15:49.840 "data_offset": 0, 00:15:49.840 "data_size": 65536 00:15:49.840 }, 00:15:49.840 { 00:15:49.840 "name": "BaseBdev2", 00:15:49.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.840 "is_configured": false, 00:15:49.840 "data_offset": 0, 00:15:49.840 "data_size": 0 00:15:49.840 }, 00:15:49.840 { 00:15:49.840 "name": "BaseBdev3", 00:15:49.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.840 "is_configured": false, 00:15:49.840 "data_offset": 0, 00:15:49.840 "data_size": 0 00:15:49.840 }, 00:15:49.840 { 00:15:49.840 "name": "BaseBdev4", 00:15:49.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.840 "is_configured": false, 00:15:49.840 "data_offset": 0, 00:15:49.840 "data_size": 0 00:15:49.840 } 00:15:49.840 ] 00:15:49.840 }' 00:15:49.840 08:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.840 08:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.099 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:50.099 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.099 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.099 
[2024-09-28 08:53:28.090616] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.099 [2024-09-28 08:53:28.090737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.358 [2024-09-28 08:53:28.102659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.358 [2024-09-28 08:53:28.104687] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.358 [2024-09-28 08:53:28.104769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.358 [2024-09-28 08:53:28.104784] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:50.358 [2024-09-28 08:53:28.104795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:50.358 [2024-09-28 08:53:28.104802] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:50.358 [2024-09-28 08:53:28.104809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.358 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.359 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.359 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.359 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.359 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.359 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.359 "name": "Existed_Raid", 00:15:50.359 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:50.359 "strip_size_kb": 64, 00:15:50.359 "state": "configuring", 00:15:50.359 "raid_level": "raid5f", 00:15:50.359 "superblock": false, 00:15:50.359 "num_base_bdevs": 4, 00:15:50.359 "num_base_bdevs_discovered": 1, 00:15:50.359 "num_base_bdevs_operational": 4, 00:15:50.359 "base_bdevs_list": [ 00:15:50.359 { 00:15:50.359 "name": "BaseBdev1", 00:15:50.359 "uuid": "6908f815-54f5-4450-8f51-0469d77cce9e", 00:15:50.359 "is_configured": true, 00:15:50.359 "data_offset": 0, 00:15:50.359 "data_size": 65536 00:15:50.359 }, 00:15:50.359 { 00:15:50.359 "name": "BaseBdev2", 00:15:50.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.359 "is_configured": false, 00:15:50.359 "data_offset": 0, 00:15:50.359 "data_size": 0 00:15:50.359 }, 00:15:50.359 { 00:15:50.359 "name": "BaseBdev3", 00:15:50.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.359 "is_configured": false, 00:15:50.359 "data_offset": 0, 00:15:50.359 "data_size": 0 00:15:50.359 }, 00:15:50.359 { 00:15:50.359 "name": "BaseBdev4", 00:15:50.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.359 "is_configured": false, 00:15:50.359 "data_offset": 0, 00:15:50.359 "data_size": 0 00:15:50.359 } 00:15:50.359 ] 00:15:50.359 }' 00:15:50.359 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.359 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.619 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:50.619 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.619 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.879 [2024-09-28 08:53:28.632251] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.879 BaseBdev2 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.879 [ 00:15:50.879 { 00:15:50.879 "name": "BaseBdev2", 00:15:50.879 "aliases": [ 00:15:50.879 "a93ae389-7cf0-47dc-8385-95207f7a27bd" 00:15:50.879 ], 00:15:50.879 "product_name": "Malloc disk", 00:15:50.879 "block_size": 512, 00:15:50.879 "num_blocks": 65536, 00:15:50.879 "uuid": "a93ae389-7cf0-47dc-8385-95207f7a27bd", 00:15:50.879 "assigned_rate_limits": { 00:15:50.879 "rw_ios_per_sec": 0, 00:15:50.879 "rw_mbytes_per_sec": 0, 00:15:50.879 
"r_mbytes_per_sec": 0, 00:15:50.879 "w_mbytes_per_sec": 0 00:15:50.879 }, 00:15:50.879 "claimed": true, 00:15:50.879 "claim_type": "exclusive_write", 00:15:50.879 "zoned": false, 00:15:50.879 "supported_io_types": { 00:15:50.879 "read": true, 00:15:50.879 "write": true, 00:15:50.879 "unmap": true, 00:15:50.879 "flush": true, 00:15:50.879 "reset": true, 00:15:50.879 "nvme_admin": false, 00:15:50.879 "nvme_io": false, 00:15:50.879 "nvme_io_md": false, 00:15:50.879 "write_zeroes": true, 00:15:50.879 "zcopy": true, 00:15:50.879 "get_zone_info": false, 00:15:50.879 "zone_management": false, 00:15:50.879 "zone_append": false, 00:15:50.879 "compare": false, 00:15:50.879 "compare_and_write": false, 00:15:50.879 "abort": true, 00:15:50.879 "seek_hole": false, 00:15:50.879 "seek_data": false, 00:15:50.879 "copy": true, 00:15:50.879 "nvme_iov_md": false 00:15:50.879 }, 00:15:50.879 "memory_domains": [ 00:15:50.879 { 00:15:50.879 "dma_device_id": "system", 00:15:50.879 "dma_device_type": 1 00:15:50.879 }, 00:15:50.879 { 00:15:50.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.879 "dma_device_type": 2 00:15:50.879 } 00:15:50.879 ], 00:15:50.879 "driver_specific": {} 00:15:50.879 } 00:15:50.879 ] 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.879 "name": "Existed_Raid", 00:15:50.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.879 "strip_size_kb": 64, 00:15:50.879 "state": "configuring", 00:15:50.879 "raid_level": "raid5f", 00:15:50.879 "superblock": false, 00:15:50.879 "num_base_bdevs": 4, 00:15:50.879 "num_base_bdevs_discovered": 2, 00:15:50.879 "num_base_bdevs_operational": 4, 00:15:50.879 "base_bdevs_list": [ 00:15:50.879 { 00:15:50.879 "name": "BaseBdev1", 00:15:50.879 "uuid": 
"6908f815-54f5-4450-8f51-0469d77cce9e", 00:15:50.879 "is_configured": true, 00:15:50.879 "data_offset": 0, 00:15:50.879 "data_size": 65536 00:15:50.879 }, 00:15:50.879 { 00:15:50.879 "name": "BaseBdev2", 00:15:50.879 "uuid": "a93ae389-7cf0-47dc-8385-95207f7a27bd", 00:15:50.879 "is_configured": true, 00:15:50.879 "data_offset": 0, 00:15:50.879 "data_size": 65536 00:15:50.879 }, 00:15:50.879 { 00:15:50.879 "name": "BaseBdev3", 00:15:50.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.879 "is_configured": false, 00:15:50.879 "data_offset": 0, 00:15:50.879 "data_size": 0 00:15:50.879 }, 00:15:50.879 { 00:15:50.879 "name": "BaseBdev4", 00:15:50.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.879 "is_configured": false, 00:15:50.879 "data_offset": 0, 00:15:50.879 "data_size": 0 00:15:50.879 } 00:15:50.879 ] 00:15:50.879 }' 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.879 08:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.450 [2024-09-28 08:53:29.191856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.450 BaseBdev3 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.450 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.450 [ 00:15:51.450 { 00:15:51.450 "name": "BaseBdev3", 00:15:51.450 "aliases": [ 00:15:51.450 "1741cb94-f48d-4295-909a-26a6f899dbe6" 00:15:51.450 ], 00:15:51.450 "product_name": "Malloc disk", 00:15:51.450 "block_size": 512, 00:15:51.450 "num_blocks": 65536, 00:15:51.450 "uuid": "1741cb94-f48d-4295-909a-26a6f899dbe6", 00:15:51.450 "assigned_rate_limits": { 00:15:51.450 "rw_ios_per_sec": 0, 00:15:51.450 "rw_mbytes_per_sec": 0, 00:15:51.450 "r_mbytes_per_sec": 0, 00:15:51.450 "w_mbytes_per_sec": 0 00:15:51.450 }, 00:15:51.450 "claimed": true, 00:15:51.450 "claim_type": "exclusive_write", 00:15:51.450 "zoned": false, 00:15:51.450 "supported_io_types": { 00:15:51.450 "read": true, 00:15:51.450 "write": true, 00:15:51.450 "unmap": true, 00:15:51.450 "flush": true, 00:15:51.450 "reset": true, 00:15:51.450 "nvme_admin": false, 
00:15:51.450 "nvme_io": false, 00:15:51.450 "nvme_io_md": false, 00:15:51.451 "write_zeroes": true, 00:15:51.451 "zcopy": true, 00:15:51.451 "get_zone_info": false, 00:15:51.451 "zone_management": false, 00:15:51.451 "zone_append": false, 00:15:51.451 "compare": false, 00:15:51.451 "compare_and_write": false, 00:15:51.451 "abort": true, 00:15:51.451 "seek_hole": false, 00:15:51.451 "seek_data": false, 00:15:51.451 "copy": true, 00:15:51.451 "nvme_iov_md": false 00:15:51.451 }, 00:15:51.451 "memory_domains": [ 00:15:51.451 { 00:15:51.451 "dma_device_id": "system", 00:15:51.451 "dma_device_type": 1 00:15:51.451 }, 00:15:51.451 { 00:15:51.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.451 "dma_device_type": 2 00:15:51.451 } 00:15:51.451 ], 00:15:51.451 "driver_specific": {} 00:15:51.451 } 00:15:51.451 ] 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.451 "name": "Existed_Raid", 00:15:51.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.451 "strip_size_kb": 64, 00:15:51.451 "state": "configuring", 00:15:51.451 "raid_level": "raid5f", 00:15:51.451 "superblock": false, 00:15:51.451 "num_base_bdevs": 4, 00:15:51.451 "num_base_bdevs_discovered": 3, 00:15:51.451 "num_base_bdevs_operational": 4, 00:15:51.451 "base_bdevs_list": [ 00:15:51.451 { 00:15:51.451 "name": "BaseBdev1", 00:15:51.451 "uuid": "6908f815-54f5-4450-8f51-0469d77cce9e", 00:15:51.451 "is_configured": true, 00:15:51.451 "data_offset": 0, 00:15:51.451 "data_size": 65536 00:15:51.451 }, 00:15:51.451 { 00:15:51.451 "name": "BaseBdev2", 00:15:51.451 "uuid": "a93ae389-7cf0-47dc-8385-95207f7a27bd", 00:15:51.451 "is_configured": true, 00:15:51.451 "data_offset": 0, 00:15:51.451 "data_size": 65536 00:15:51.451 }, 00:15:51.451 { 
00:15:51.451 "name": "BaseBdev3", 00:15:51.451 "uuid": "1741cb94-f48d-4295-909a-26a6f899dbe6", 00:15:51.451 "is_configured": true, 00:15:51.451 "data_offset": 0, 00:15:51.451 "data_size": 65536 00:15:51.451 }, 00:15:51.451 { 00:15:51.451 "name": "BaseBdev4", 00:15:51.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.451 "is_configured": false, 00:15:51.451 "data_offset": 0, 00:15:51.451 "data_size": 0 00:15:51.451 } 00:15:51.451 ] 00:15:51.451 }' 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.451 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.715 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:51.715 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.715 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.715 [2024-09-28 08:53:29.695117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:51.715 [2024-09-28 08:53:29.695188] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:51.715 [2024-09-28 08:53:29.695202] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:51.715 [2024-09-28 08:53:29.695499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:51.715 [2024-09-28 08:53:29.702939] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:51.715 [2024-09-28 08:53:29.703026] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:51.715 [2024-09-28 08:53:29.703344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.715 BaseBdev4 00:15:51.715 08:53:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.715 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:51.715 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:51.715 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:51.715 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:51.715 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:51.715 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:51.715 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:51.715 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.715 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.975 [ 00:15:51.975 { 00:15:51.975 "name": "BaseBdev4", 00:15:51.975 "aliases": [ 00:15:51.975 "caab33ca-2865-425b-9ae6-82a3d48a1335" 00:15:51.975 ], 00:15:51.975 "product_name": "Malloc disk", 00:15:51.975 "block_size": 512, 00:15:51.975 "num_blocks": 65536, 00:15:51.975 "uuid": "caab33ca-2865-425b-9ae6-82a3d48a1335", 00:15:51.975 "assigned_rate_limits": { 00:15:51.975 "rw_ios_per_sec": 0, 00:15:51.975 
"rw_mbytes_per_sec": 0, 00:15:51.975 "r_mbytes_per_sec": 0, 00:15:51.975 "w_mbytes_per_sec": 0 00:15:51.975 }, 00:15:51.975 "claimed": true, 00:15:51.975 "claim_type": "exclusive_write", 00:15:51.975 "zoned": false, 00:15:51.975 "supported_io_types": { 00:15:51.975 "read": true, 00:15:51.975 "write": true, 00:15:51.975 "unmap": true, 00:15:51.975 "flush": true, 00:15:51.975 "reset": true, 00:15:51.975 "nvme_admin": false, 00:15:51.975 "nvme_io": false, 00:15:51.975 "nvme_io_md": false, 00:15:51.975 "write_zeroes": true, 00:15:51.975 "zcopy": true, 00:15:51.975 "get_zone_info": false, 00:15:51.975 "zone_management": false, 00:15:51.975 "zone_append": false, 00:15:51.975 "compare": false, 00:15:51.975 "compare_and_write": false, 00:15:51.975 "abort": true, 00:15:51.975 "seek_hole": false, 00:15:51.975 "seek_data": false, 00:15:51.975 "copy": true, 00:15:51.975 "nvme_iov_md": false 00:15:51.975 }, 00:15:51.975 "memory_domains": [ 00:15:51.975 { 00:15:51.975 "dma_device_id": "system", 00:15:51.975 "dma_device_type": 1 00:15:51.975 }, 00:15:51.975 { 00:15:51.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.975 "dma_device_type": 2 00:15:51.975 } 00:15:51.975 ], 00:15:51.975 "driver_specific": {} 00:15:51.975 } 00:15:51.975 ] 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.975 08:53:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.975 "name": "Existed_Raid", 00:15:51.975 "uuid": "b71bc529-8661-4b2e-a7ad-58ba28dfc051", 00:15:51.975 "strip_size_kb": 64, 00:15:51.975 "state": "online", 00:15:51.975 "raid_level": "raid5f", 00:15:51.975 "superblock": false, 00:15:51.975 "num_base_bdevs": 4, 00:15:51.975 "num_base_bdevs_discovered": 4, 00:15:51.975 "num_base_bdevs_operational": 4, 00:15:51.975 "base_bdevs_list": [ 00:15:51.975 { 00:15:51.975 "name": 
"BaseBdev1", 00:15:51.975 "uuid": "6908f815-54f5-4450-8f51-0469d77cce9e", 00:15:51.975 "is_configured": true, 00:15:51.975 "data_offset": 0, 00:15:51.975 "data_size": 65536 00:15:51.975 }, 00:15:51.975 { 00:15:51.975 "name": "BaseBdev2", 00:15:51.975 "uuid": "a93ae389-7cf0-47dc-8385-95207f7a27bd", 00:15:51.975 "is_configured": true, 00:15:51.975 "data_offset": 0, 00:15:51.975 "data_size": 65536 00:15:51.975 }, 00:15:51.975 { 00:15:51.975 "name": "BaseBdev3", 00:15:51.975 "uuid": "1741cb94-f48d-4295-909a-26a6f899dbe6", 00:15:51.975 "is_configured": true, 00:15:51.975 "data_offset": 0, 00:15:51.975 "data_size": 65536 00:15:51.975 }, 00:15:51.975 { 00:15:51.975 "name": "BaseBdev4", 00:15:51.975 "uuid": "caab33ca-2865-425b-9ae6-82a3d48a1335", 00:15:51.975 "is_configured": true, 00:15:51.975 "data_offset": 0, 00:15:51.975 "data_size": 65536 00:15:51.975 } 00:15:51.975 ] 00:15:51.975 }' 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.975 08:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.235 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:52.235 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:52.235 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:52.235 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:52.235 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:52.235 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:52.235 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:52.235 08:53:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.235 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.235 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:52.235 [2024-09-28 08:53:30.219640] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.494 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.494 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:52.494 "name": "Existed_Raid", 00:15:52.494 "aliases": [ 00:15:52.494 "b71bc529-8661-4b2e-a7ad-58ba28dfc051" 00:15:52.494 ], 00:15:52.494 "product_name": "Raid Volume", 00:15:52.494 "block_size": 512, 00:15:52.494 "num_blocks": 196608, 00:15:52.494 "uuid": "b71bc529-8661-4b2e-a7ad-58ba28dfc051", 00:15:52.494 "assigned_rate_limits": { 00:15:52.494 "rw_ios_per_sec": 0, 00:15:52.494 "rw_mbytes_per_sec": 0, 00:15:52.495 "r_mbytes_per_sec": 0, 00:15:52.495 "w_mbytes_per_sec": 0 00:15:52.495 }, 00:15:52.495 "claimed": false, 00:15:52.495 "zoned": false, 00:15:52.495 "supported_io_types": { 00:15:52.495 "read": true, 00:15:52.495 "write": true, 00:15:52.495 "unmap": false, 00:15:52.495 "flush": false, 00:15:52.495 "reset": true, 00:15:52.495 "nvme_admin": false, 00:15:52.495 "nvme_io": false, 00:15:52.495 "nvme_io_md": false, 00:15:52.495 "write_zeroes": true, 00:15:52.495 "zcopy": false, 00:15:52.495 "get_zone_info": false, 00:15:52.495 "zone_management": false, 00:15:52.495 "zone_append": false, 00:15:52.495 "compare": false, 00:15:52.495 "compare_and_write": false, 00:15:52.495 "abort": false, 00:15:52.495 "seek_hole": false, 00:15:52.495 "seek_data": false, 00:15:52.495 "copy": false, 00:15:52.495 "nvme_iov_md": false 00:15:52.495 }, 00:15:52.495 "driver_specific": { 00:15:52.495 "raid": { 00:15:52.495 "uuid": "b71bc529-8661-4b2e-a7ad-58ba28dfc051", 00:15:52.495 "strip_size_kb": 64, 
00:15:52.495 "state": "online", 00:15:52.495 "raid_level": "raid5f", 00:15:52.495 "superblock": false, 00:15:52.495 "num_base_bdevs": 4, 00:15:52.495 "num_base_bdevs_discovered": 4, 00:15:52.495 "num_base_bdevs_operational": 4, 00:15:52.495 "base_bdevs_list": [ 00:15:52.495 { 00:15:52.495 "name": "BaseBdev1", 00:15:52.495 "uuid": "6908f815-54f5-4450-8f51-0469d77cce9e", 00:15:52.495 "is_configured": true, 00:15:52.495 "data_offset": 0, 00:15:52.495 "data_size": 65536 00:15:52.495 }, 00:15:52.495 { 00:15:52.495 "name": "BaseBdev2", 00:15:52.495 "uuid": "a93ae389-7cf0-47dc-8385-95207f7a27bd", 00:15:52.495 "is_configured": true, 00:15:52.495 "data_offset": 0, 00:15:52.495 "data_size": 65536 00:15:52.495 }, 00:15:52.495 { 00:15:52.495 "name": "BaseBdev3", 00:15:52.495 "uuid": "1741cb94-f48d-4295-909a-26a6f899dbe6", 00:15:52.495 "is_configured": true, 00:15:52.495 "data_offset": 0, 00:15:52.495 "data_size": 65536 00:15:52.495 }, 00:15:52.495 { 00:15:52.495 "name": "BaseBdev4", 00:15:52.495 "uuid": "caab33ca-2865-425b-9ae6-82a3d48a1335", 00:15:52.495 "is_configured": true, 00:15:52.495 "data_offset": 0, 00:15:52.495 "data_size": 65536 00:15:52.495 } 00:15:52.495 ] 00:15:52.495 } 00:15:52.495 } 00:15:52.495 }' 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:52.495 BaseBdev2 00:15:52.495 BaseBdev3 00:15:52.495 BaseBdev4' 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.495 08:53:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.495 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:52.754 [2024-09-28 08:53:30.558952] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.754 08:53:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.754 "name": "Existed_Raid", 00:15:52.754 "uuid": "b71bc529-8661-4b2e-a7ad-58ba28dfc051", 00:15:52.754 "strip_size_kb": 64, 00:15:52.754 "state": "online", 00:15:52.754 "raid_level": "raid5f", 00:15:52.754 "superblock": false, 00:15:52.754 "num_base_bdevs": 4, 00:15:52.754 "num_base_bdevs_discovered": 3, 00:15:52.754 "num_base_bdevs_operational": 3, 00:15:52.754 "base_bdevs_list": [ 00:15:52.754 { 00:15:52.754 "name": null, 00:15:52.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.754 "is_configured": false, 00:15:52.754 "data_offset": 0, 00:15:52.754 "data_size": 65536 00:15:52.754 }, 00:15:52.754 { 00:15:52.754 "name": "BaseBdev2", 00:15:52.754 "uuid": "a93ae389-7cf0-47dc-8385-95207f7a27bd", 00:15:52.754 "is_configured": true, 00:15:52.754 "data_offset": 0, 00:15:52.754 "data_size": 65536 00:15:52.754 }, 00:15:52.754 { 00:15:52.754 "name": "BaseBdev3", 00:15:52.754 "uuid": "1741cb94-f48d-4295-909a-26a6f899dbe6", 00:15:52.754 "is_configured": true, 00:15:52.754 "data_offset": 0, 00:15:52.754 "data_size": 65536 00:15:52.754 }, 00:15:52.754 { 00:15:52.754 "name": "BaseBdev4", 00:15:52.754 "uuid": "caab33ca-2865-425b-9ae6-82a3d48a1335", 00:15:52.754 "is_configured": true, 00:15:52.754 "data_offset": 0, 00:15:52.754 "data_size": 65536 00:15:52.754 } 00:15:52.754 ] 00:15:52.754 }' 00:15:52.754 
08:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.754 08:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.330 [2024-09-28 08:53:31.116570] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:53.330 [2024-09-28 08:53:31.116753] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.330 [2024-09-28 08:53:31.216513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.330 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.330 [2024-09-28 08:53:31.276431] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:53.589 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.589 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:53.589 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:53.589 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.589 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:15:53.589 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.589 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.589 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.589 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:53.589 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.589 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:53.590 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.590 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.590 [2024-09-28 08:53:31.432973] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:53.590 [2024-09-28 08:53:31.433085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:53.590 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.590 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:53.590 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:53.590 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.590 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.590 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:53.590 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.590 08:53:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.590 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:53.590 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:53.590 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:53.849 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.850 BaseBdev2 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.850 [ 00:15:53.850 { 00:15:53.850 "name": "BaseBdev2", 00:15:53.850 "aliases": [ 00:15:53.850 "bcaf2d05-b7a0-4a86-9ad0-25b33debb553" 00:15:53.850 ], 00:15:53.850 "product_name": "Malloc disk", 00:15:53.850 "block_size": 512, 00:15:53.850 "num_blocks": 65536, 00:15:53.850 "uuid": "bcaf2d05-b7a0-4a86-9ad0-25b33debb553", 00:15:53.850 "assigned_rate_limits": { 00:15:53.850 "rw_ios_per_sec": 0, 00:15:53.850 "rw_mbytes_per_sec": 0, 00:15:53.850 "r_mbytes_per_sec": 0, 00:15:53.850 "w_mbytes_per_sec": 0 00:15:53.850 }, 00:15:53.850 "claimed": false, 00:15:53.850 "zoned": false, 00:15:53.850 "supported_io_types": { 00:15:53.850 "read": true, 00:15:53.850 "write": true, 00:15:53.850 "unmap": true, 00:15:53.850 "flush": true, 00:15:53.850 "reset": true, 00:15:53.850 "nvme_admin": false, 00:15:53.850 "nvme_io": false, 00:15:53.850 "nvme_io_md": false, 00:15:53.850 "write_zeroes": true, 00:15:53.850 "zcopy": true, 00:15:53.850 "get_zone_info": false, 00:15:53.850 "zone_management": false, 00:15:53.850 "zone_append": false, 00:15:53.850 "compare": false, 00:15:53.850 "compare_and_write": false, 00:15:53.850 "abort": true, 00:15:53.850 "seek_hole": false, 00:15:53.850 "seek_data": false, 00:15:53.850 "copy": true, 00:15:53.850 "nvme_iov_md": false 00:15:53.850 }, 00:15:53.850 "memory_domains": [ 00:15:53.850 { 00:15:53.850 "dma_device_id": "system", 00:15:53.850 "dma_device_type": 1 00:15:53.850 }, 
00:15:53.850 { 00:15:53.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.850 "dma_device_type": 2 00:15:53.850 } 00:15:53.850 ], 00:15:53.850 "driver_specific": {} 00:15:53.850 } 00:15:53.850 ] 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.850 BaseBdev3 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.850 [ 00:15:53.850 { 00:15:53.850 "name": "BaseBdev3", 00:15:53.850 "aliases": [ 00:15:53.850 "601cb610-fb8b-4e37-b1d3-6aff2aef65fb" 00:15:53.850 ], 00:15:53.850 "product_name": "Malloc disk", 00:15:53.850 "block_size": 512, 00:15:53.850 "num_blocks": 65536, 00:15:53.850 "uuid": "601cb610-fb8b-4e37-b1d3-6aff2aef65fb", 00:15:53.850 "assigned_rate_limits": { 00:15:53.850 "rw_ios_per_sec": 0, 00:15:53.850 "rw_mbytes_per_sec": 0, 00:15:53.850 "r_mbytes_per_sec": 0, 00:15:53.850 "w_mbytes_per_sec": 0 00:15:53.850 }, 00:15:53.850 "claimed": false, 00:15:53.850 "zoned": false, 00:15:53.850 "supported_io_types": { 00:15:53.850 "read": true, 00:15:53.850 "write": true, 00:15:53.850 "unmap": true, 00:15:53.850 "flush": true, 00:15:53.850 "reset": true, 00:15:53.850 "nvme_admin": false, 00:15:53.850 "nvme_io": false, 00:15:53.850 "nvme_io_md": false, 00:15:53.850 "write_zeroes": true, 00:15:53.850 "zcopy": true, 00:15:53.850 "get_zone_info": false, 00:15:53.850 "zone_management": false, 00:15:53.850 "zone_append": false, 00:15:53.850 "compare": false, 00:15:53.850 "compare_and_write": false, 00:15:53.850 "abort": true, 00:15:53.850 "seek_hole": false, 00:15:53.850 "seek_data": false, 00:15:53.850 "copy": true, 00:15:53.850 "nvme_iov_md": false 00:15:53.850 }, 00:15:53.850 "memory_domains": [ 00:15:53.850 { 00:15:53.850 "dma_device_id": "system", 00:15:53.850 
"dma_device_type": 1 00:15:53.850 }, 00:15:53.850 { 00:15:53.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.850 "dma_device_type": 2 00:15:53.850 } 00:15:53.850 ], 00:15:53.850 "driver_specific": {} 00:15:53.850 } 00:15:53.850 ] 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.850 BaseBdev4 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:53.850 08:53:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.850 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.850 [ 00:15:53.850 { 00:15:53.850 "name": "BaseBdev4", 00:15:53.850 "aliases": [ 00:15:53.850 "35cc077f-62bf-42d2-b4ae-7fbcce7c10b4" 00:15:53.850 ], 00:15:53.850 "product_name": "Malloc disk", 00:15:53.850 "block_size": 512, 00:15:53.850 "num_blocks": 65536, 00:15:53.850 "uuid": "35cc077f-62bf-42d2-b4ae-7fbcce7c10b4", 00:15:53.850 "assigned_rate_limits": { 00:15:53.850 "rw_ios_per_sec": 0, 00:15:53.850 "rw_mbytes_per_sec": 0, 00:15:53.850 "r_mbytes_per_sec": 0, 00:15:53.850 "w_mbytes_per_sec": 0 00:15:53.850 }, 00:15:53.850 "claimed": false, 00:15:53.850 "zoned": false, 00:15:53.850 "supported_io_types": { 00:15:53.850 "read": true, 00:15:53.850 "write": true, 00:15:53.850 "unmap": true, 00:15:53.850 "flush": true, 00:15:53.850 "reset": true, 00:15:53.850 "nvme_admin": false, 00:15:53.850 "nvme_io": false, 00:15:53.851 "nvme_io_md": false, 00:15:53.851 "write_zeroes": true, 00:15:53.851 "zcopy": true, 00:15:53.851 "get_zone_info": false, 00:15:53.851 "zone_management": false, 00:15:53.851 "zone_append": false, 00:15:53.851 "compare": false, 00:15:53.851 "compare_and_write": false, 00:15:53.851 "abort": true, 00:15:53.851 "seek_hole": false, 00:15:53.851 "seek_data": false, 00:15:53.851 "copy": true, 00:15:53.851 "nvme_iov_md": false 00:15:53.851 }, 00:15:53.851 "memory_domains": [ 00:15:53.851 { 00:15:53.851 
"dma_device_id": "system", 00:15:53.851 "dma_device_type": 1 00:15:53.851 }, 00:15:53.851 { 00:15:53.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.851 "dma_device_type": 2 00:15:53.851 } 00:15:53.851 ], 00:15:53.851 "driver_specific": {} 00:15:53.851 } 00:15:53.851 ] 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.851 [2024-09-28 08:53:31.831482] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:53.851 [2024-09-28 08:53:31.831602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:53.851 [2024-09-28 08:53:31.831659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.851 [2024-09-28 08:53:31.833654] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:53.851 [2024-09-28 08:53:31.833761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.851 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.109 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.109 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.109 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.109 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.109 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.109 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.109 "name": "Existed_Raid", 00:15:54.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.109 "strip_size_kb": 64, 00:15:54.109 "state": "configuring", 00:15:54.109 "raid_level": "raid5f", 00:15:54.109 "superblock": false, 00:15:54.109 
"num_base_bdevs": 4, 00:15:54.109 "num_base_bdevs_discovered": 3, 00:15:54.109 "num_base_bdevs_operational": 4, 00:15:54.109 "base_bdevs_list": [ 00:15:54.109 { 00:15:54.109 "name": "BaseBdev1", 00:15:54.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.109 "is_configured": false, 00:15:54.109 "data_offset": 0, 00:15:54.109 "data_size": 0 00:15:54.109 }, 00:15:54.109 { 00:15:54.109 "name": "BaseBdev2", 00:15:54.109 "uuid": "bcaf2d05-b7a0-4a86-9ad0-25b33debb553", 00:15:54.109 "is_configured": true, 00:15:54.109 "data_offset": 0, 00:15:54.109 "data_size": 65536 00:15:54.109 }, 00:15:54.109 { 00:15:54.109 "name": "BaseBdev3", 00:15:54.109 "uuid": "601cb610-fb8b-4e37-b1d3-6aff2aef65fb", 00:15:54.109 "is_configured": true, 00:15:54.109 "data_offset": 0, 00:15:54.109 "data_size": 65536 00:15:54.109 }, 00:15:54.109 { 00:15:54.109 "name": "BaseBdev4", 00:15:54.110 "uuid": "35cc077f-62bf-42d2-b4ae-7fbcce7c10b4", 00:15:54.110 "is_configured": true, 00:15:54.110 "data_offset": 0, 00:15:54.110 "data_size": 65536 00:15:54.110 } 00:15:54.110 ] 00:15:54.110 }' 00:15:54.110 08:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.110 08:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.368 [2024-09-28 08:53:32.314869] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.368 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.627 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.627 "name": "Existed_Raid", 00:15:54.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.627 "strip_size_kb": 64, 00:15:54.627 "state": "configuring", 00:15:54.627 "raid_level": "raid5f", 00:15:54.627 "superblock": false, 00:15:54.628 "num_base_bdevs": 4, 
00:15:54.628 "num_base_bdevs_discovered": 2, 00:15:54.628 "num_base_bdevs_operational": 4, 00:15:54.628 "base_bdevs_list": [ 00:15:54.628 { 00:15:54.628 "name": "BaseBdev1", 00:15:54.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.628 "is_configured": false, 00:15:54.628 "data_offset": 0, 00:15:54.628 "data_size": 0 00:15:54.628 }, 00:15:54.628 { 00:15:54.628 "name": null, 00:15:54.628 "uuid": "bcaf2d05-b7a0-4a86-9ad0-25b33debb553", 00:15:54.628 "is_configured": false, 00:15:54.628 "data_offset": 0, 00:15:54.628 "data_size": 65536 00:15:54.628 }, 00:15:54.628 { 00:15:54.628 "name": "BaseBdev3", 00:15:54.628 "uuid": "601cb610-fb8b-4e37-b1d3-6aff2aef65fb", 00:15:54.628 "is_configured": true, 00:15:54.628 "data_offset": 0, 00:15:54.628 "data_size": 65536 00:15:54.628 }, 00:15:54.628 { 00:15:54.628 "name": "BaseBdev4", 00:15:54.628 "uuid": "35cc077f-62bf-42d2-b4ae-7fbcce7c10b4", 00:15:54.628 "is_configured": true, 00:15:54.628 "data_offset": 0, 00:15:54.628 "data_size": 65536 00:15:54.628 } 00:15:54.628 ] 00:15:54.628 }' 00:15:54.628 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.628 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:54.888 08:53:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.888 [2024-09-28 08:53:32.867250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.888 BaseBdev1 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:54.888 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.888 08:53:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 [ 00:15:55.148 { 00:15:55.148 "name": "BaseBdev1", 00:15:55.148 "aliases": [ 00:15:55.148 "eadb8df0-396b-43f6-9e0a-63bba3987a52" 00:15:55.148 ], 00:15:55.148 "product_name": "Malloc disk", 00:15:55.148 "block_size": 512, 00:15:55.148 "num_blocks": 65536, 00:15:55.148 "uuid": "eadb8df0-396b-43f6-9e0a-63bba3987a52", 00:15:55.148 "assigned_rate_limits": { 00:15:55.148 "rw_ios_per_sec": 0, 00:15:55.148 "rw_mbytes_per_sec": 0, 00:15:55.148 "r_mbytes_per_sec": 0, 00:15:55.148 "w_mbytes_per_sec": 0 00:15:55.148 }, 00:15:55.148 "claimed": true, 00:15:55.148 "claim_type": "exclusive_write", 00:15:55.148 "zoned": false, 00:15:55.148 "supported_io_types": { 00:15:55.148 "read": true, 00:15:55.148 "write": true, 00:15:55.148 "unmap": true, 00:15:55.148 "flush": true, 00:15:55.148 "reset": true, 00:15:55.148 "nvme_admin": false, 00:15:55.148 "nvme_io": false, 00:15:55.148 "nvme_io_md": false, 00:15:55.148 "write_zeroes": true, 00:15:55.148 "zcopy": true, 00:15:55.148 "get_zone_info": false, 00:15:55.148 "zone_management": false, 00:15:55.148 "zone_append": false, 00:15:55.148 "compare": false, 00:15:55.148 "compare_and_write": false, 00:15:55.148 "abort": true, 00:15:55.148 "seek_hole": false, 00:15:55.148 "seek_data": false, 00:15:55.148 "copy": true, 00:15:55.148 "nvme_iov_md": false 00:15:55.148 }, 00:15:55.148 "memory_domains": [ 00:15:55.148 { 00:15:55.148 "dma_device_id": "system", 00:15:55.148 "dma_device_type": 1 00:15:55.148 }, 00:15:55.148 { 00:15:55.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.148 "dma_device_type": 2 00:15:55.148 } 00:15:55.148 ], 00:15:55.148 "driver_specific": {} 00:15:55.148 } 00:15:55.148 ] 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:55.148 08:53:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.148 "name": "Existed_Raid", 00:15:55.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.148 "strip_size_kb": 64, 00:15:55.148 "state": 
"configuring", 00:15:55.148 "raid_level": "raid5f", 00:15:55.148 "superblock": false, 00:15:55.148 "num_base_bdevs": 4, 00:15:55.148 "num_base_bdevs_discovered": 3, 00:15:55.148 "num_base_bdevs_operational": 4, 00:15:55.148 "base_bdevs_list": [ 00:15:55.148 { 00:15:55.148 "name": "BaseBdev1", 00:15:55.148 "uuid": "eadb8df0-396b-43f6-9e0a-63bba3987a52", 00:15:55.148 "is_configured": true, 00:15:55.148 "data_offset": 0, 00:15:55.148 "data_size": 65536 00:15:55.148 }, 00:15:55.148 { 00:15:55.148 "name": null, 00:15:55.148 "uuid": "bcaf2d05-b7a0-4a86-9ad0-25b33debb553", 00:15:55.148 "is_configured": false, 00:15:55.148 "data_offset": 0, 00:15:55.148 "data_size": 65536 00:15:55.148 }, 00:15:55.148 { 00:15:55.148 "name": "BaseBdev3", 00:15:55.148 "uuid": "601cb610-fb8b-4e37-b1d3-6aff2aef65fb", 00:15:55.148 "is_configured": true, 00:15:55.148 "data_offset": 0, 00:15:55.148 "data_size": 65536 00:15:55.148 }, 00:15:55.148 { 00:15:55.148 "name": "BaseBdev4", 00:15:55.148 "uuid": "35cc077f-62bf-42d2-b4ae-7fbcce7c10b4", 00:15:55.148 "is_configured": true, 00:15:55.148 "data_offset": 0, 00:15:55.148 "data_size": 65536 00:15:55.148 } 00:15:55.148 ] 00:15:55.148 }' 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.148 08:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.408 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.408 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.408 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.408 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:55.408 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.408 08:53:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:55.408 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:55.408 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.408 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.408 [2024-09-28 08:53:33.398380] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:55.668 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.668 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:55.668 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.668 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.668 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.668 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.668 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.668 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.668 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.668 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.668 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.668 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.668 08:53:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.668 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.668 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.668 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.668 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.668 "name": "Existed_Raid", 00:15:55.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.668 "strip_size_kb": 64, 00:15:55.668 "state": "configuring", 00:15:55.668 "raid_level": "raid5f", 00:15:55.668 "superblock": false, 00:15:55.668 "num_base_bdevs": 4, 00:15:55.668 "num_base_bdevs_discovered": 2, 00:15:55.668 "num_base_bdevs_operational": 4, 00:15:55.668 "base_bdevs_list": [ 00:15:55.668 { 00:15:55.668 "name": "BaseBdev1", 00:15:55.668 "uuid": "eadb8df0-396b-43f6-9e0a-63bba3987a52", 00:15:55.668 "is_configured": true, 00:15:55.668 "data_offset": 0, 00:15:55.668 "data_size": 65536 00:15:55.668 }, 00:15:55.668 { 00:15:55.668 "name": null, 00:15:55.668 "uuid": "bcaf2d05-b7a0-4a86-9ad0-25b33debb553", 00:15:55.668 "is_configured": false, 00:15:55.668 "data_offset": 0, 00:15:55.668 "data_size": 65536 00:15:55.668 }, 00:15:55.668 { 00:15:55.668 "name": null, 00:15:55.668 "uuid": "601cb610-fb8b-4e37-b1d3-6aff2aef65fb", 00:15:55.668 "is_configured": false, 00:15:55.668 "data_offset": 0, 00:15:55.668 "data_size": 65536 00:15:55.668 }, 00:15:55.668 { 00:15:55.668 "name": "BaseBdev4", 00:15:55.668 "uuid": "35cc077f-62bf-42d2-b4ae-7fbcce7c10b4", 00:15:55.668 "is_configured": true, 00:15:55.668 "data_offset": 0, 00:15:55.668 "data_size": 65536 00:15:55.668 } 00:15:55.669 ] 00:15:55.669 }' 00:15:55.669 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.669 08:53:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.928 [2024-09-28 08:53:33.873582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.928 
08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.928 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.188 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.188 "name": "Existed_Raid", 00:15:56.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.188 "strip_size_kb": 64, 00:15:56.188 "state": "configuring", 00:15:56.188 "raid_level": "raid5f", 00:15:56.188 "superblock": false, 00:15:56.189 "num_base_bdevs": 4, 00:15:56.189 "num_base_bdevs_discovered": 3, 00:15:56.189 "num_base_bdevs_operational": 4, 00:15:56.189 "base_bdevs_list": [ 00:15:56.189 { 00:15:56.189 "name": "BaseBdev1", 00:15:56.189 "uuid": "eadb8df0-396b-43f6-9e0a-63bba3987a52", 00:15:56.189 "is_configured": true, 00:15:56.189 "data_offset": 0, 00:15:56.189 "data_size": 65536 00:15:56.189 }, 00:15:56.189 { 00:15:56.189 "name": null, 00:15:56.189 "uuid": "bcaf2d05-b7a0-4a86-9ad0-25b33debb553", 00:15:56.189 "is_configured": 
false, 00:15:56.189 "data_offset": 0, 00:15:56.189 "data_size": 65536 00:15:56.189 }, 00:15:56.189 { 00:15:56.189 "name": "BaseBdev3", 00:15:56.189 "uuid": "601cb610-fb8b-4e37-b1d3-6aff2aef65fb", 00:15:56.189 "is_configured": true, 00:15:56.189 "data_offset": 0, 00:15:56.189 "data_size": 65536 00:15:56.189 }, 00:15:56.189 { 00:15:56.189 "name": "BaseBdev4", 00:15:56.189 "uuid": "35cc077f-62bf-42d2-b4ae-7fbcce7c10b4", 00:15:56.189 "is_configured": true, 00:15:56.189 "data_offset": 0, 00:15:56.189 "data_size": 65536 00:15:56.189 } 00:15:56.189 ] 00:15:56.189 }' 00:15:56.189 08:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.189 08:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.449 [2024-09-28 08:53:34.336781] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.449 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.710 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.710 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.710 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.710 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.710 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.710 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.710 "name": "Existed_Raid", 00:15:56.710 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:56.710 "strip_size_kb": 64, 00:15:56.710 "state": "configuring", 00:15:56.710 "raid_level": "raid5f", 00:15:56.710 "superblock": false, 00:15:56.710 "num_base_bdevs": 4, 00:15:56.710 "num_base_bdevs_discovered": 2, 00:15:56.710 "num_base_bdevs_operational": 4, 00:15:56.710 "base_bdevs_list": [ 00:15:56.710 { 00:15:56.710 "name": null, 00:15:56.710 "uuid": "eadb8df0-396b-43f6-9e0a-63bba3987a52", 00:15:56.710 "is_configured": false, 00:15:56.710 "data_offset": 0, 00:15:56.710 "data_size": 65536 00:15:56.710 }, 00:15:56.710 { 00:15:56.710 "name": null, 00:15:56.710 "uuid": "bcaf2d05-b7a0-4a86-9ad0-25b33debb553", 00:15:56.710 "is_configured": false, 00:15:56.710 "data_offset": 0, 00:15:56.710 "data_size": 65536 00:15:56.710 }, 00:15:56.710 { 00:15:56.710 "name": "BaseBdev3", 00:15:56.710 "uuid": "601cb610-fb8b-4e37-b1d3-6aff2aef65fb", 00:15:56.710 "is_configured": true, 00:15:56.710 "data_offset": 0, 00:15:56.710 "data_size": 65536 00:15:56.710 }, 00:15:56.710 { 00:15:56.710 "name": "BaseBdev4", 00:15:56.710 "uuid": "35cc077f-62bf-42d2-b4ae-7fbcce7c10b4", 00:15:56.710 "is_configured": true, 00:15:56.710 "data_offset": 0, 00:15:56.710 "data_size": 65536 00:15:56.710 } 00:15:56.710 ] 00:15:56.710 }' 00:15:56.710 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.710 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.972 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.972 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.972 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.972 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:56.972 08:53:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.972 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.973 [2024-09-28 08:53:34.950154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.973 08:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.232 08:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.232 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.232 "name": "Existed_Raid", 00:15:57.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.232 "strip_size_kb": 64, 00:15:57.232 "state": "configuring", 00:15:57.232 "raid_level": "raid5f", 00:15:57.232 "superblock": false, 00:15:57.232 "num_base_bdevs": 4, 00:15:57.232 "num_base_bdevs_discovered": 3, 00:15:57.232 "num_base_bdevs_operational": 4, 00:15:57.232 "base_bdevs_list": [ 00:15:57.232 { 00:15:57.232 "name": null, 00:15:57.232 "uuid": "eadb8df0-396b-43f6-9e0a-63bba3987a52", 00:15:57.232 "is_configured": false, 00:15:57.232 "data_offset": 0, 00:15:57.232 "data_size": 65536 00:15:57.232 }, 00:15:57.232 { 00:15:57.232 "name": "BaseBdev2", 00:15:57.232 "uuid": "bcaf2d05-b7a0-4a86-9ad0-25b33debb553", 00:15:57.232 "is_configured": true, 00:15:57.232 "data_offset": 0, 00:15:57.232 "data_size": 65536 00:15:57.232 }, 00:15:57.232 { 00:15:57.232 "name": "BaseBdev3", 00:15:57.232 "uuid": "601cb610-fb8b-4e37-b1d3-6aff2aef65fb", 00:15:57.232 "is_configured": true, 00:15:57.232 "data_offset": 0, 00:15:57.232 "data_size": 65536 00:15:57.232 }, 00:15:57.232 { 00:15:57.232 "name": "BaseBdev4", 00:15:57.232 "uuid": "35cc077f-62bf-42d2-b4ae-7fbcce7c10b4", 00:15:57.232 "is_configured": true, 00:15:57.232 "data_offset": 0, 00:15:57.232 "data_size": 65536 00:15:57.232 } 00:15:57.232 ] 00:15:57.232 }' 00:15:57.232 08:53:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.232 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.492 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.492 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.492 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.492 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:57.492 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u eadb8df0-396b-43f6-9e0a-63bba3987a52 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.752 [2024-09-28 08:53:35.577444] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:57.752 [2024-09-28 
08:53:35.577511] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:57.752 [2024-09-28 08:53:35.577520] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:57.752 [2024-09-28 08:53:35.577836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:57.752 [2024-09-28 08:53:35.584728] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:57.752 [2024-09-28 08:53:35.584753] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:57.752 [2024-09-28 08:53:35.585009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.752 NewBaseBdev 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.752 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.752 [ 00:15:57.752 { 00:15:57.752 "name": "NewBaseBdev", 00:15:57.752 "aliases": [ 00:15:57.752 "eadb8df0-396b-43f6-9e0a-63bba3987a52" 00:15:57.752 ], 00:15:57.752 "product_name": "Malloc disk", 00:15:57.752 "block_size": 512, 00:15:57.752 "num_blocks": 65536, 00:15:57.753 "uuid": "eadb8df0-396b-43f6-9e0a-63bba3987a52", 00:15:57.753 "assigned_rate_limits": { 00:15:57.753 "rw_ios_per_sec": 0, 00:15:57.753 "rw_mbytes_per_sec": 0, 00:15:57.753 "r_mbytes_per_sec": 0, 00:15:57.753 "w_mbytes_per_sec": 0 00:15:57.753 }, 00:15:57.753 "claimed": true, 00:15:57.753 "claim_type": "exclusive_write", 00:15:57.753 "zoned": false, 00:15:57.753 "supported_io_types": { 00:15:57.753 "read": true, 00:15:57.753 "write": true, 00:15:57.753 "unmap": true, 00:15:57.753 "flush": true, 00:15:57.753 "reset": true, 00:15:57.753 "nvme_admin": false, 00:15:57.753 "nvme_io": false, 00:15:57.753 "nvme_io_md": false, 00:15:57.753 "write_zeroes": true, 00:15:57.753 "zcopy": true, 00:15:57.753 "get_zone_info": false, 00:15:57.753 "zone_management": false, 00:15:57.753 "zone_append": false, 00:15:57.753 "compare": false, 00:15:57.753 "compare_and_write": false, 00:15:57.753 "abort": true, 00:15:57.753 "seek_hole": false, 00:15:57.753 "seek_data": false, 00:15:57.753 "copy": true, 00:15:57.753 "nvme_iov_md": false 00:15:57.753 }, 00:15:57.753 "memory_domains": [ 00:15:57.753 { 00:15:57.753 "dma_device_id": "system", 00:15:57.753 "dma_device_type": 1 00:15:57.753 }, 00:15:57.753 { 00:15:57.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.753 "dma_device_type": 2 00:15:57.753 } 
00:15:57.753 ], 00:15:57.753 "driver_specific": {} 00:15:57.753 } 00:15:57.753 ] 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.753 "name": "Existed_Raid", 00:15:57.753 "uuid": "f0160074-26e0-485f-bb1c-b371164afbe2", 00:15:57.753 "strip_size_kb": 64, 00:15:57.753 "state": "online", 00:15:57.753 "raid_level": "raid5f", 00:15:57.753 "superblock": false, 00:15:57.753 "num_base_bdevs": 4, 00:15:57.753 "num_base_bdevs_discovered": 4, 00:15:57.753 "num_base_bdevs_operational": 4, 00:15:57.753 "base_bdevs_list": [ 00:15:57.753 { 00:15:57.753 "name": "NewBaseBdev", 00:15:57.753 "uuid": "eadb8df0-396b-43f6-9e0a-63bba3987a52", 00:15:57.753 "is_configured": true, 00:15:57.753 "data_offset": 0, 00:15:57.753 "data_size": 65536 00:15:57.753 }, 00:15:57.753 { 00:15:57.753 "name": "BaseBdev2", 00:15:57.753 "uuid": "bcaf2d05-b7a0-4a86-9ad0-25b33debb553", 00:15:57.753 "is_configured": true, 00:15:57.753 "data_offset": 0, 00:15:57.753 "data_size": 65536 00:15:57.753 }, 00:15:57.753 { 00:15:57.753 "name": "BaseBdev3", 00:15:57.753 "uuid": "601cb610-fb8b-4e37-b1d3-6aff2aef65fb", 00:15:57.753 "is_configured": true, 00:15:57.753 "data_offset": 0, 00:15:57.753 "data_size": 65536 00:15:57.753 }, 00:15:57.753 { 00:15:57.753 "name": "BaseBdev4", 00:15:57.753 "uuid": "35cc077f-62bf-42d2-b4ae-7fbcce7c10b4", 00:15:57.753 "is_configured": true, 00:15:57.753 "data_offset": 0, 00:15:57.753 "data_size": 65536 00:15:57.753 } 00:15:57.753 ] 00:15:57.753 }' 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.753 08:53:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.322 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:58.322 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:58.322 08:53:36 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:58.322 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:58.322 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:58.322 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:58.322 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:58.322 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:58.322 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.322 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.322 [2024-09-28 08:53:36.089163] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.322 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.322 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:58.322 "name": "Existed_Raid", 00:15:58.322 "aliases": [ 00:15:58.322 "f0160074-26e0-485f-bb1c-b371164afbe2" 00:15:58.322 ], 00:15:58.322 "product_name": "Raid Volume", 00:15:58.322 "block_size": 512, 00:15:58.322 "num_blocks": 196608, 00:15:58.322 "uuid": "f0160074-26e0-485f-bb1c-b371164afbe2", 00:15:58.322 "assigned_rate_limits": { 00:15:58.322 "rw_ios_per_sec": 0, 00:15:58.322 "rw_mbytes_per_sec": 0, 00:15:58.322 "r_mbytes_per_sec": 0, 00:15:58.322 "w_mbytes_per_sec": 0 00:15:58.322 }, 00:15:58.322 "claimed": false, 00:15:58.322 "zoned": false, 00:15:58.322 "supported_io_types": { 00:15:58.322 "read": true, 00:15:58.322 "write": true, 00:15:58.322 "unmap": false, 00:15:58.322 "flush": false, 00:15:58.322 "reset": true, 00:15:58.322 "nvme_admin": false, 00:15:58.322 "nvme_io": false, 00:15:58.322 "nvme_io_md": 
false, 00:15:58.322 "write_zeroes": true, 00:15:58.322 "zcopy": false, 00:15:58.322 "get_zone_info": false, 00:15:58.322 "zone_management": false, 00:15:58.322 "zone_append": false, 00:15:58.322 "compare": false, 00:15:58.322 "compare_and_write": false, 00:15:58.322 "abort": false, 00:15:58.322 "seek_hole": false, 00:15:58.322 "seek_data": false, 00:15:58.322 "copy": false, 00:15:58.322 "nvme_iov_md": false 00:15:58.322 }, 00:15:58.322 "driver_specific": { 00:15:58.322 "raid": { 00:15:58.322 "uuid": "f0160074-26e0-485f-bb1c-b371164afbe2", 00:15:58.322 "strip_size_kb": 64, 00:15:58.322 "state": "online", 00:15:58.322 "raid_level": "raid5f", 00:15:58.322 "superblock": false, 00:15:58.322 "num_base_bdevs": 4, 00:15:58.322 "num_base_bdevs_discovered": 4, 00:15:58.322 "num_base_bdevs_operational": 4, 00:15:58.322 "base_bdevs_list": [ 00:15:58.322 { 00:15:58.322 "name": "NewBaseBdev", 00:15:58.322 "uuid": "eadb8df0-396b-43f6-9e0a-63bba3987a52", 00:15:58.322 "is_configured": true, 00:15:58.322 "data_offset": 0, 00:15:58.322 "data_size": 65536 00:15:58.322 }, 00:15:58.322 { 00:15:58.322 "name": "BaseBdev2", 00:15:58.322 "uuid": "bcaf2d05-b7a0-4a86-9ad0-25b33debb553", 00:15:58.322 "is_configured": true, 00:15:58.323 "data_offset": 0, 00:15:58.323 "data_size": 65536 00:15:58.323 }, 00:15:58.323 { 00:15:58.323 "name": "BaseBdev3", 00:15:58.323 "uuid": "601cb610-fb8b-4e37-b1d3-6aff2aef65fb", 00:15:58.323 "is_configured": true, 00:15:58.323 "data_offset": 0, 00:15:58.323 "data_size": 65536 00:15:58.323 }, 00:15:58.323 { 00:15:58.323 "name": "BaseBdev4", 00:15:58.323 "uuid": "35cc077f-62bf-42d2-b4ae-7fbcce7c10b4", 00:15:58.323 "is_configured": true, 00:15:58.323 "data_offset": 0, 00:15:58.323 "data_size": 65536 00:15:58.323 } 00:15:58.323 ] 00:15:58.323 } 00:15:58.323 } 00:15:58.323 }' 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:58.323 08:53:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:58.323 BaseBdev2 00:15:58.323 BaseBdev3 00:15:58.323 BaseBdev4' 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.323 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.583 [2024-09-28 08:53:36.424409] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.583 [2024-09-28 08:53:36.424476] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.583 [2024-09-28 08:53:36.424550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.583 [2024-09-28 08:53:36.424882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.583 [2024-09-28 08:53:36.424894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82775 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82775 ']' 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82775 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:58.583 08:53:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82775 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82775' 00:15:58.583 killing process with pid 82775 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 82775 00:15:58.583 [2024-09-28 08:53:36.471892] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:58.583 08:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 82775 00:15:59.153 [2024-09-28 08:53:36.878934] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:00.535 08:53:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:00.535 00:16:00.535 real 0m12.017s 00:16:00.535 user 0m18.699s 00:16:00.535 sys 0m2.341s 00:16:00.535 08:53:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:00.535 08:53:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.535 ************************************ 00:16:00.535 END TEST raid5f_state_function_test 00:16:00.535 ************************************ 00:16:00.535 08:53:38 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:00.535 08:53:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:00.535 08:53:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:00.535 08:53:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:00.535 ************************************ 00:16:00.535 START TEST 
raid5f_state_function_test_sb 00:16:00.535 ************************************ 00:16:00.535 08:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:16:00.535 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:00.535 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:00.535 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:00.535 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:00.536 
08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83445 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83445' 00:16:00.536 Process raid pid: 83445 00:16:00.536 08:53:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83445 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83445 ']' 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:00.536 08:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.536 [2024-09-28 08:53:38.393864] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:00.536 [2024-09-28 08:53:38.394051] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.795 [2024-09-28 08:53:38.558755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.055 [2024-09-28 08:53:38.810808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.315 [2024-09-28 08:53:39.050056] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.315 [2024-09-28 08:53:39.050108] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.315 [2024-09-28 08:53:39.210321] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:01.315 [2024-09-28 08:53:39.210383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:01.315 [2024-09-28 08:53:39.210393] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:01.315 [2024-09-28 08:53:39.210402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:01.315 [2024-09-28 08:53:39.210408] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:01.315 [2024-09-28 08:53:39.210418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:01.315 [2024-09-28 08:53:39.210423] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:01.315 [2024-09-28 08:53:39.210433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.315 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.315 "name": "Existed_Raid", 00:16:01.315 "uuid": "111d14e4-e534-41eb-8909-3d39ac92b62b", 00:16:01.315 "strip_size_kb": 64, 00:16:01.315 "state": "configuring", 00:16:01.315 "raid_level": "raid5f", 00:16:01.315 "superblock": true, 00:16:01.315 "num_base_bdevs": 4, 00:16:01.315 "num_base_bdevs_discovered": 0, 00:16:01.315 "num_base_bdevs_operational": 4, 00:16:01.315 "base_bdevs_list": [ 00:16:01.315 { 00:16:01.315 "name": "BaseBdev1", 00:16:01.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.315 "is_configured": false, 00:16:01.315 "data_offset": 0, 00:16:01.315 "data_size": 0 00:16:01.315 }, 00:16:01.315 { 00:16:01.315 "name": "BaseBdev2", 00:16:01.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.315 "is_configured": false, 00:16:01.315 "data_offset": 0, 00:16:01.315 "data_size": 0 00:16:01.315 }, 00:16:01.315 { 00:16:01.315 "name": "BaseBdev3", 00:16:01.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.315 "is_configured": false, 00:16:01.315 "data_offset": 0, 00:16:01.315 "data_size": 0 00:16:01.315 }, 00:16:01.315 { 00:16:01.315 "name": "BaseBdev4", 00:16:01.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.315 "is_configured": false, 00:16:01.315 "data_offset": 0, 00:16:01.316 "data_size": 0 00:16:01.316 } 00:16:01.316 ] 00:16:01.316 }' 00:16:01.316 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.316 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.886 [2024-09-28 08:53:39.641462] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:01.886 [2024-09-28 08:53:39.641559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.886 [2024-09-28 08:53:39.653475] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:01.886 [2024-09-28 08:53:39.653549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:01.886 [2024-09-28 08:53:39.653572] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:01.886 [2024-09-28 08:53:39.653594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:01.886 [2024-09-28 08:53:39.653610] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:01.886 [2024-09-28 08:53:39.653630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:01.886 [2024-09-28 08:53:39.653646] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:01.886 [2024-09-28 08:53:39.653698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.886 [2024-09-28 08:53:39.740299] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:01.886 BaseBdev1 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.886 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.886 [ 00:16:01.886 { 00:16:01.887 "name": "BaseBdev1", 00:16:01.887 "aliases": [ 00:16:01.887 "4fffa97e-3b08-4a5d-9bb5-1d7affd5d4ac" 00:16:01.887 ], 00:16:01.887 "product_name": "Malloc disk", 00:16:01.887 "block_size": 512, 00:16:01.887 "num_blocks": 65536, 00:16:01.887 "uuid": "4fffa97e-3b08-4a5d-9bb5-1d7affd5d4ac", 00:16:01.887 "assigned_rate_limits": { 00:16:01.887 "rw_ios_per_sec": 0, 00:16:01.887 "rw_mbytes_per_sec": 0, 00:16:01.887 "r_mbytes_per_sec": 0, 00:16:01.887 "w_mbytes_per_sec": 0 00:16:01.887 }, 00:16:01.887 "claimed": true, 00:16:01.887 "claim_type": "exclusive_write", 00:16:01.887 "zoned": false, 00:16:01.887 "supported_io_types": { 00:16:01.887 "read": true, 00:16:01.887 "write": true, 00:16:01.887 "unmap": true, 00:16:01.887 "flush": true, 00:16:01.887 "reset": true, 00:16:01.887 "nvme_admin": false, 00:16:01.887 "nvme_io": false, 00:16:01.887 "nvme_io_md": false, 00:16:01.887 "write_zeroes": true, 00:16:01.887 "zcopy": true, 00:16:01.887 "get_zone_info": false, 00:16:01.887 "zone_management": false, 00:16:01.887 "zone_append": false, 00:16:01.887 "compare": false, 00:16:01.887 "compare_and_write": false, 00:16:01.887 "abort": true, 00:16:01.887 "seek_hole": false, 00:16:01.887 "seek_data": false, 00:16:01.887 "copy": true, 00:16:01.887 "nvme_iov_md": false 00:16:01.887 }, 00:16:01.887 "memory_domains": [ 00:16:01.887 { 00:16:01.887 "dma_device_id": "system", 00:16:01.887 "dma_device_type": 1 00:16:01.887 }, 00:16:01.887 { 00:16:01.887 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:01.887 "dma_device_type": 2 00:16:01.887 } 00:16:01.887 ], 00:16:01.887 "driver_specific": {} 00:16:01.887 } 00:16:01.887 ] 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.887 08:53:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.887 "name": "Existed_Raid", 00:16:01.887 "uuid": "39d570fb-c701-434b-a961-901a01253b0a", 00:16:01.887 "strip_size_kb": 64, 00:16:01.887 "state": "configuring", 00:16:01.887 "raid_level": "raid5f", 00:16:01.887 "superblock": true, 00:16:01.887 "num_base_bdevs": 4, 00:16:01.887 "num_base_bdevs_discovered": 1, 00:16:01.887 "num_base_bdevs_operational": 4, 00:16:01.887 "base_bdevs_list": [ 00:16:01.887 { 00:16:01.887 "name": "BaseBdev1", 00:16:01.887 "uuid": "4fffa97e-3b08-4a5d-9bb5-1d7affd5d4ac", 00:16:01.887 "is_configured": true, 00:16:01.887 "data_offset": 2048, 00:16:01.887 "data_size": 63488 00:16:01.887 }, 00:16:01.887 { 00:16:01.887 "name": "BaseBdev2", 00:16:01.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.887 "is_configured": false, 00:16:01.887 "data_offset": 0, 00:16:01.887 "data_size": 0 00:16:01.887 }, 00:16:01.887 { 00:16:01.887 "name": "BaseBdev3", 00:16:01.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.887 "is_configured": false, 00:16:01.887 "data_offset": 0, 00:16:01.887 "data_size": 0 00:16:01.887 }, 00:16:01.887 { 00:16:01.887 "name": "BaseBdev4", 00:16:01.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.887 "is_configured": false, 00:16:01.887 "data_offset": 0, 00:16:01.887 "data_size": 0 00:16:01.887 } 00:16:01.887 ] 00:16:01.887 }' 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.887 08:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:02.457 08:53:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.457 [2024-09-28 08:53:40.227459] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:02.457 [2024-09-28 08:53:40.227541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.457 [2024-09-28 08:53:40.239493] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.457 [2024-09-28 08:53:40.241550] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.457 [2024-09-28 08:53:40.241626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.457 [2024-09-28 08:53:40.241660] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:02.457 [2024-09-28 08:53:40.241684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.457 [2024-09-28 08:53:40.241701] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:02.457 [2024-09-28 08:53:40.241720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.457 08:53:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.457 "name": "Existed_Raid", 00:16:02.457 "uuid": "02f9bc3d-c195-4521-8c60-b054cbe280e8", 00:16:02.457 "strip_size_kb": 64, 00:16:02.457 "state": "configuring", 00:16:02.457 "raid_level": "raid5f", 00:16:02.457 "superblock": true, 00:16:02.457 "num_base_bdevs": 4, 00:16:02.457 "num_base_bdevs_discovered": 1, 00:16:02.457 "num_base_bdevs_operational": 4, 00:16:02.457 "base_bdevs_list": [ 00:16:02.457 { 00:16:02.457 "name": "BaseBdev1", 00:16:02.457 "uuid": "4fffa97e-3b08-4a5d-9bb5-1d7affd5d4ac", 00:16:02.457 "is_configured": true, 00:16:02.457 "data_offset": 2048, 00:16:02.457 "data_size": 63488 00:16:02.457 }, 00:16:02.457 { 00:16:02.457 "name": "BaseBdev2", 00:16:02.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.457 "is_configured": false, 00:16:02.457 "data_offset": 0, 00:16:02.457 "data_size": 0 00:16:02.457 }, 00:16:02.457 { 00:16:02.457 "name": "BaseBdev3", 00:16:02.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.457 "is_configured": false, 00:16:02.457 "data_offset": 0, 00:16:02.457 "data_size": 0 00:16:02.457 }, 00:16:02.457 { 00:16:02.457 "name": "BaseBdev4", 00:16:02.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.457 "is_configured": false, 00:16:02.457 "data_offset": 0, 00:16:02.457 "data_size": 0 00:16:02.457 } 00:16:02.457 ] 00:16:02.457 }' 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.457 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.717 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:02.717 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:02.717 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.977 [2024-09-28 08:53:40.746641] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.977 BaseBdev2 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.977 [ 00:16:02.977 { 00:16:02.977 "name": "BaseBdev2", 00:16:02.977 "aliases": [ 00:16:02.977 
"4cd3d8da-ddde-40f5-9866-ec90a0a089ce" 00:16:02.977 ], 00:16:02.977 "product_name": "Malloc disk", 00:16:02.977 "block_size": 512, 00:16:02.977 "num_blocks": 65536, 00:16:02.977 "uuid": "4cd3d8da-ddde-40f5-9866-ec90a0a089ce", 00:16:02.977 "assigned_rate_limits": { 00:16:02.977 "rw_ios_per_sec": 0, 00:16:02.977 "rw_mbytes_per_sec": 0, 00:16:02.977 "r_mbytes_per_sec": 0, 00:16:02.977 "w_mbytes_per_sec": 0 00:16:02.977 }, 00:16:02.977 "claimed": true, 00:16:02.977 "claim_type": "exclusive_write", 00:16:02.977 "zoned": false, 00:16:02.977 "supported_io_types": { 00:16:02.977 "read": true, 00:16:02.977 "write": true, 00:16:02.977 "unmap": true, 00:16:02.977 "flush": true, 00:16:02.977 "reset": true, 00:16:02.977 "nvme_admin": false, 00:16:02.977 "nvme_io": false, 00:16:02.977 "nvme_io_md": false, 00:16:02.977 "write_zeroes": true, 00:16:02.977 "zcopy": true, 00:16:02.977 "get_zone_info": false, 00:16:02.977 "zone_management": false, 00:16:02.977 "zone_append": false, 00:16:02.977 "compare": false, 00:16:02.977 "compare_and_write": false, 00:16:02.977 "abort": true, 00:16:02.977 "seek_hole": false, 00:16:02.977 "seek_data": false, 00:16:02.977 "copy": true, 00:16:02.977 "nvme_iov_md": false 00:16:02.977 }, 00:16:02.977 "memory_domains": [ 00:16:02.977 { 00:16:02.977 "dma_device_id": "system", 00:16:02.977 "dma_device_type": 1 00:16:02.977 }, 00:16:02.977 { 00:16:02.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.977 "dma_device_type": 2 00:16:02.977 } 00:16:02.977 ], 00:16:02.977 "driver_specific": {} 00:16:02.977 } 00:16:02.977 ] 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.977 "name": "Existed_Raid", 00:16:02.977 "uuid": 
"02f9bc3d-c195-4521-8c60-b054cbe280e8", 00:16:02.977 "strip_size_kb": 64, 00:16:02.977 "state": "configuring", 00:16:02.977 "raid_level": "raid5f", 00:16:02.977 "superblock": true, 00:16:02.977 "num_base_bdevs": 4, 00:16:02.977 "num_base_bdevs_discovered": 2, 00:16:02.977 "num_base_bdevs_operational": 4, 00:16:02.977 "base_bdevs_list": [ 00:16:02.977 { 00:16:02.977 "name": "BaseBdev1", 00:16:02.977 "uuid": "4fffa97e-3b08-4a5d-9bb5-1d7affd5d4ac", 00:16:02.977 "is_configured": true, 00:16:02.977 "data_offset": 2048, 00:16:02.977 "data_size": 63488 00:16:02.977 }, 00:16:02.977 { 00:16:02.977 "name": "BaseBdev2", 00:16:02.977 "uuid": "4cd3d8da-ddde-40f5-9866-ec90a0a089ce", 00:16:02.977 "is_configured": true, 00:16:02.977 "data_offset": 2048, 00:16:02.977 "data_size": 63488 00:16:02.977 }, 00:16:02.977 { 00:16:02.977 "name": "BaseBdev3", 00:16:02.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.977 "is_configured": false, 00:16:02.977 "data_offset": 0, 00:16:02.977 "data_size": 0 00:16:02.977 }, 00:16:02.977 { 00:16:02.977 "name": "BaseBdev4", 00:16:02.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.977 "is_configured": false, 00:16:02.977 "data_offset": 0, 00:16:02.977 "data_size": 0 00:16:02.977 } 00:16:02.977 ] 00:16:02.977 }' 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.977 08:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.237 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:03.237 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.237 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.497 [2024-09-28 08:53:41.275750] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.497 BaseBdev3 
00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.497 [ 00:16:03.497 { 00:16:03.497 "name": "BaseBdev3", 00:16:03.497 "aliases": [ 00:16:03.497 "2cb592c0-e4ea-4997-bc8f-f06e729884dd" 00:16:03.497 ], 00:16:03.497 "product_name": "Malloc disk", 00:16:03.497 "block_size": 512, 00:16:03.497 "num_blocks": 65536, 00:16:03.497 "uuid": "2cb592c0-e4ea-4997-bc8f-f06e729884dd", 00:16:03.497 
"assigned_rate_limits": { 00:16:03.497 "rw_ios_per_sec": 0, 00:16:03.497 "rw_mbytes_per_sec": 0, 00:16:03.497 "r_mbytes_per_sec": 0, 00:16:03.497 "w_mbytes_per_sec": 0 00:16:03.497 }, 00:16:03.497 "claimed": true, 00:16:03.497 "claim_type": "exclusive_write", 00:16:03.497 "zoned": false, 00:16:03.497 "supported_io_types": { 00:16:03.497 "read": true, 00:16:03.497 "write": true, 00:16:03.497 "unmap": true, 00:16:03.497 "flush": true, 00:16:03.497 "reset": true, 00:16:03.497 "nvme_admin": false, 00:16:03.497 "nvme_io": false, 00:16:03.497 "nvme_io_md": false, 00:16:03.497 "write_zeroes": true, 00:16:03.497 "zcopy": true, 00:16:03.497 "get_zone_info": false, 00:16:03.497 "zone_management": false, 00:16:03.497 "zone_append": false, 00:16:03.497 "compare": false, 00:16:03.497 "compare_and_write": false, 00:16:03.497 "abort": true, 00:16:03.497 "seek_hole": false, 00:16:03.497 "seek_data": false, 00:16:03.497 "copy": true, 00:16:03.497 "nvme_iov_md": false 00:16:03.497 }, 00:16:03.497 "memory_domains": [ 00:16:03.497 { 00:16:03.497 "dma_device_id": "system", 00:16:03.497 "dma_device_type": 1 00:16:03.497 }, 00:16:03.497 { 00:16:03.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.497 "dma_device_type": 2 00:16:03.497 } 00:16:03.497 ], 00:16:03.497 "driver_specific": {} 00:16:03.497 } 00:16:03.497 ] 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.497 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.497 "name": "Existed_Raid", 00:16:03.497 "uuid": "02f9bc3d-c195-4521-8c60-b054cbe280e8", 00:16:03.497 "strip_size_kb": 64, 00:16:03.497 "state": "configuring", 00:16:03.497 "raid_level": "raid5f", 00:16:03.497 "superblock": true, 00:16:03.497 "num_base_bdevs": 4, 00:16:03.497 "num_base_bdevs_discovered": 3, 
00:16:03.497 "num_base_bdevs_operational": 4, 00:16:03.497 "base_bdevs_list": [ 00:16:03.497 { 00:16:03.497 "name": "BaseBdev1", 00:16:03.497 "uuid": "4fffa97e-3b08-4a5d-9bb5-1d7affd5d4ac", 00:16:03.497 "is_configured": true, 00:16:03.497 "data_offset": 2048, 00:16:03.497 "data_size": 63488 00:16:03.497 }, 00:16:03.497 { 00:16:03.497 "name": "BaseBdev2", 00:16:03.497 "uuid": "4cd3d8da-ddde-40f5-9866-ec90a0a089ce", 00:16:03.497 "is_configured": true, 00:16:03.497 "data_offset": 2048, 00:16:03.497 "data_size": 63488 00:16:03.497 }, 00:16:03.497 { 00:16:03.497 "name": "BaseBdev3", 00:16:03.497 "uuid": "2cb592c0-e4ea-4997-bc8f-f06e729884dd", 00:16:03.497 "is_configured": true, 00:16:03.497 "data_offset": 2048, 00:16:03.498 "data_size": 63488 00:16:03.498 }, 00:16:03.498 { 00:16:03.498 "name": "BaseBdev4", 00:16:03.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.498 "is_configured": false, 00:16:03.498 "data_offset": 0, 00:16:03.498 "data_size": 0 00:16:03.498 } 00:16:03.498 ] 00:16:03.498 }' 00:16:03.498 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.498 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.069 [2024-09-28 08:53:41.818372] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:04.069 [2024-09-28 08:53:41.818669] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:04.069 [2024-09-28 08:53:41.818707] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:04.069 BaseBdev4 
00:16:04.069 [2024-09-28 08:53:41.819007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.069 [2024-09-28 08:53:41.825979] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:04.069 [2024-09-28 08:53:41.826048] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:04.069 [2024-09-28 08:53:41.826354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:04.069 08:53:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.069 [ 00:16:04.069 { 00:16:04.069 "name": "BaseBdev4", 00:16:04.069 "aliases": [ 00:16:04.069 "499e5177-c041-4348-a910-13906759f891" 00:16:04.069 ], 00:16:04.069 "product_name": "Malloc disk", 00:16:04.069 "block_size": 512, 00:16:04.069 "num_blocks": 65536, 00:16:04.069 "uuid": "499e5177-c041-4348-a910-13906759f891", 00:16:04.069 "assigned_rate_limits": { 00:16:04.069 "rw_ios_per_sec": 0, 00:16:04.069 "rw_mbytes_per_sec": 0, 00:16:04.069 "r_mbytes_per_sec": 0, 00:16:04.069 "w_mbytes_per_sec": 0 00:16:04.069 }, 00:16:04.069 "claimed": true, 00:16:04.069 "claim_type": "exclusive_write", 00:16:04.069 "zoned": false, 00:16:04.069 "supported_io_types": { 00:16:04.069 "read": true, 00:16:04.069 "write": true, 00:16:04.069 "unmap": true, 00:16:04.069 "flush": true, 00:16:04.069 "reset": true, 00:16:04.069 "nvme_admin": false, 00:16:04.069 "nvme_io": false, 00:16:04.069 "nvme_io_md": false, 00:16:04.069 "write_zeroes": true, 00:16:04.069 "zcopy": true, 00:16:04.069 "get_zone_info": false, 00:16:04.069 "zone_management": false, 00:16:04.069 "zone_append": false, 00:16:04.069 "compare": false, 00:16:04.069 "compare_and_write": false, 00:16:04.069 "abort": true, 00:16:04.069 "seek_hole": false, 00:16:04.069 "seek_data": false, 00:16:04.069 "copy": true, 00:16:04.069 "nvme_iov_md": false 00:16:04.069 }, 00:16:04.069 "memory_domains": [ 00:16:04.069 { 00:16:04.069 "dma_device_id": "system", 00:16:04.069 "dma_device_type": 1 00:16:04.069 }, 00:16:04.069 { 00:16:04.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.069 "dma_device_type": 2 00:16:04.069 } 00:16:04.069 ], 00:16:04.069 "driver_specific": {} 00:16:04.069 } 00:16:04.069 ] 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.069 08:53:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.069 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.069 "name": "Existed_Raid", 00:16:04.069 "uuid": "02f9bc3d-c195-4521-8c60-b054cbe280e8", 00:16:04.069 "strip_size_kb": 64, 00:16:04.069 "state": "online", 00:16:04.069 "raid_level": "raid5f", 00:16:04.069 "superblock": true, 00:16:04.069 "num_base_bdevs": 4, 00:16:04.069 "num_base_bdevs_discovered": 4, 00:16:04.069 "num_base_bdevs_operational": 4, 00:16:04.069 "base_bdevs_list": [ 00:16:04.069 { 00:16:04.069 "name": "BaseBdev1", 00:16:04.069 "uuid": "4fffa97e-3b08-4a5d-9bb5-1d7affd5d4ac", 00:16:04.069 "is_configured": true, 00:16:04.069 "data_offset": 2048, 00:16:04.069 "data_size": 63488 00:16:04.069 }, 00:16:04.069 { 00:16:04.069 "name": "BaseBdev2", 00:16:04.069 "uuid": "4cd3d8da-ddde-40f5-9866-ec90a0a089ce", 00:16:04.069 "is_configured": true, 00:16:04.069 "data_offset": 2048, 00:16:04.069 "data_size": 63488 00:16:04.069 }, 00:16:04.069 { 00:16:04.069 "name": "BaseBdev3", 00:16:04.069 "uuid": "2cb592c0-e4ea-4997-bc8f-f06e729884dd", 00:16:04.069 "is_configured": true, 00:16:04.069 "data_offset": 2048, 00:16:04.069 "data_size": 63488 00:16:04.069 }, 00:16:04.069 { 00:16:04.069 "name": "BaseBdev4", 00:16:04.069 "uuid": "499e5177-c041-4348-a910-13906759f891", 00:16:04.069 "is_configured": true, 00:16:04.069 "data_offset": 2048, 00:16:04.069 "data_size": 63488 00:16:04.069 } 00:16:04.069 ] 00:16:04.069 }' 00:16:04.070 08:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.070 08:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.329 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:04.329 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:04.329 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:04.329 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:04.329 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:04.329 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:04.329 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:04.329 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.329 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.329 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:04.329 [2024-09-28 08:53:42.278312] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.330 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.330 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:04.330 "name": "Existed_Raid", 00:16:04.330 "aliases": [ 00:16:04.330 "02f9bc3d-c195-4521-8c60-b054cbe280e8" 00:16:04.330 ], 00:16:04.330 "product_name": "Raid Volume", 00:16:04.330 "block_size": 512, 00:16:04.330 "num_blocks": 190464, 00:16:04.330 "uuid": "02f9bc3d-c195-4521-8c60-b054cbe280e8", 00:16:04.330 "assigned_rate_limits": { 00:16:04.330 "rw_ios_per_sec": 0, 00:16:04.330 "rw_mbytes_per_sec": 0, 00:16:04.330 "r_mbytes_per_sec": 0, 00:16:04.330 "w_mbytes_per_sec": 0 00:16:04.330 }, 00:16:04.330 "claimed": false, 00:16:04.330 "zoned": false, 00:16:04.330 "supported_io_types": { 00:16:04.330 "read": true, 00:16:04.330 "write": true, 00:16:04.330 "unmap": false, 00:16:04.330 "flush": false, 
00:16:04.330 "reset": true, 00:16:04.330 "nvme_admin": false, 00:16:04.330 "nvme_io": false, 00:16:04.330 "nvme_io_md": false, 00:16:04.330 "write_zeroes": true, 00:16:04.330 "zcopy": false, 00:16:04.330 "get_zone_info": false, 00:16:04.330 "zone_management": false, 00:16:04.330 "zone_append": false, 00:16:04.330 "compare": false, 00:16:04.330 "compare_and_write": false, 00:16:04.330 "abort": false, 00:16:04.330 "seek_hole": false, 00:16:04.330 "seek_data": false, 00:16:04.330 "copy": false, 00:16:04.330 "nvme_iov_md": false 00:16:04.330 }, 00:16:04.330 "driver_specific": { 00:16:04.330 "raid": { 00:16:04.330 "uuid": "02f9bc3d-c195-4521-8c60-b054cbe280e8", 00:16:04.330 "strip_size_kb": 64, 00:16:04.330 "state": "online", 00:16:04.330 "raid_level": "raid5f", 00:16:04.330 "superblock": true, 00:16:04.330 "num_base_bdevs": 4, 00:16:04.330 "num_base_bdevs_discovered": 4, 00:16:04.330 "num_base_bdevs_operational": 4, 00:16:04.330 "base_bdevs_list": [ 00:16:04.330 { 00:16:04.330 "name": "BaseBdev1", 00:16:04.330 "uuid": "4fffa97e-3b08-4a5d-9bb5-1d7affd5d4ac", 00:16:04.330 "is_configured": true, 00:16:04.330 "data_offset": 2048, 00:16:04.330 "data_size": 63488 00:16:04.330 }, 00:16:04.330 { 00:16:04.330 "name": "BaseBdev2", 00:16:04.330 "uuid": "4cd3d8da-ddde-40f5-9866-ec90a0a089ce", 00:16:04.330 "is_configured": true, 00:16:04.330 "data_offset": 2048, 00:16:04.330 "data_size": 63488 00:16:04.330 }, 00:16:04.330 { 00:16:04.330 "name": "BaseBdev3", 00:16:04.330 "uuid": "2cb592c0-e4ea-4997-bc8f-f06e729884dd", 00:16:04.330 "is_configured": true, 00:16:04.330 "data_offset": 2048, 00:16:04.330 "data_size": 63488 00:16:04.330 }, 00:16:04.330 { 00:16:04.330 "name": "BaseBdev4", 00:16:04.330 "uuid": "499e5177-c041-4348-a910-13906759f891", 00:16:04.330 "is_configured": true, 00:16:04.330 "data_offset": 2048, 00:16:04.330 "data_size": 63488 00:16:04.330 } 00:16:04.330 ] 00:16:04.330 } 00:16:04.330 } 00:16:04.330 }' 00:16:04.330 08:53:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:04.590 BaseBdev2 00:16:04.590 BaseBdev3 00:16:04.590 BaseBdev4' 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.590 08:53:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:04.590 08:53:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.590 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.590 [2024-09-28 08:53:42.581713] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.850 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.850 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:04.850 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:04.850 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:04.850 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:04.850 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:04.850 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.851 "name": "Existed_Raid", 00:16:04.851 "uuid": "02f9bc3d-c195-4521-8c60-b054cbe280e8", 00:16:04.851 "strip_size_kb": 64, 00:16:04.851 "state": "online", 00:16:04.851 "raid_level": "raid5f", 00:16:04.851 "superblock": true, 00:16:04.851 "num_base_bdevs": 4, 00:16:04.851 "num_base_bdevs_discovered": 3, 00:16:04.851 "num_base_bdevs_operational": 3, 00:16:04.851 "base_bdevs_list": [ 00:16:04.851 { 00:16:04.851 "name": 
null, 00:16:04.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.851 "is_configured": false, 00:16:04.851 "data_offset": 0, 00:16:04.851 "data_size": 63488 00:16:04.851 }, 00:16:04.851 { 00:16:04.851 "name": "BaseBdev2", 00:16:04.851 "uuid": "4cd3d8da-ddde-40f5-9866-ec90a0a089ce", 00:16:04.851 "is_configured": true, 00:16:04.851 "data_offset": 2048, 00:16:04.851 "data_size": 63488 00:16:04.851 }, 00:16:04.851 { 00:16:04.851 "name": "BaseBdev3", 00:16:04.851 "uuid": "2cb592c0-e4ea-4997-bc8f-f06e729884dd", 00:16:04.851 "is_configured": true, 00:16:04.851 "data_offset": 2048, 00:16:04.851 "data_size": 63488 00:16:04.851 }, 00:16:04.851 { 00:16:04.851 "name": "BaseBdev4", 00:16:04.851 "uuid": "499e5177-c041-4348-a910-13906759f891", 00:16:04.851 "is_configured": true, 00:16:04.851 "data_offset": 2048, 00:16:04.851 "data_size": 63488 00:16:04.851 } 00:16:04.851 ] 00:16:04.851 }' 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.851 08:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.420 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.421 [2024-09-28 08:53:43.185271] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:05.421 [2024-09-28 08:53:43.185509] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.421 [2024-09-28 08:53:43.284271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.421 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.421 [2024-09-28 08:53:43.344184] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.708 [2024-09-28 
08:53:43.503164] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:05.708 [2024-09-28 08:53:43.503296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:05.708 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.708 08:53:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.000 BaseBdev2 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.000 [ 00:16:06.000 { 00:16:06.000 "name": "BaseBdev2", 00:16:06.000 "aliases": [ 00:16:06.000 "45c6529c-eec1-4047-9332-8413f89b87bd" 00:16:06.000 ], 00:16:06.000 "product_name": "Malloc disk", 00:16:06.000 "block_size": 512, 00:16:06.000 
"num_blocks": 65536, 00:16:06.000 "uuid": "45c6529c-eec1-4047-9332-8413f89b87bd", 00:16:06.000 "assigned_rate_limits": { 00:16:06.000 "rw_ios_per_sec": 0, 00:16:06.000 "rw_mbytes_per_sec": 0, 00:16:06.000 "r_mbytes_per_sec": 0, 00:16:06.000 "w_mbytes_per_sec": 0 00:16:06.000 }, 00:16:06.000 "claimed": false, 00:16:06.000 "zoned": false, 00:16:06.000 "supported_io_types": { 00:16:06.000 "read": true, 00:16:06.000 "write": true, 00:16:06.000 "unmap": true, 00:16:06.000 "flush": true, 00:16:06.000 "reset": true, 00:16:06.000 "nvme_admin": false, 00:16:06.000 "nvme_io": false, 00:16:06.000 "nvme_io_md": false, 00:16:06.000 "write_zeroes": true, 00:16:06.000 "zcopy": true, 00:16:06.000 "get_zone_info": false, 00:16:06.000 "zone_management": false, 00:16:06.000 "zone_append": false, 00:16:06.000 "compare": false, 00:16:06.000 "compare_and_write": false, 00:16:06.000 "abort": true, 00:16:06.000 "seek_hole": false, 00:16:06.000 "seek_data": false, 00:16:06.000 "copy": true, 00:16:06.000 "nvme_iov_md": false 00:16:06.000 }, 00:16:06.000 "memory_domains": [ 00:16:06.000 { 00:16:06.000 "dma_device_id": "system", 00:16:06.000 "dma_device_type": 1 00:16:06.000 }, 00:16:06.000 { 00:16:06.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.000 "dma_device_type": 2 00:16:06.000 } 00:16:06.000 ], 00:16:06.000 "driver_specific": {} 00:16:06.000 } 00:16:06.000 ] 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:06.000 08:53:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.000 BaseBdev3 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.000 [ 00:16:06.000 { 00:16:06.000 "name": "BaseBdev3", 00:16:06.000 "aliases": [ 00:16:06.000 
"9809a18d-d2d2-4531-b900-62ea3a28046f" 00:16:06.000 ], 00:16:06.000 "product_name": "Malloc disk", 00:16:06.000 "block_size": 512, 00:16:06.000 "num_blocks": 65536, 00:16:06.000 "uuid": "9809a18d-d2d2-4531-b900-62ea3a28046f", 00:16:06.000 "assigned_rate_limits": { 00:16:06.000 "rw_ios_per_sec": 0, 00:16:06.000 "rw_mbytes_per_sec": 0, 00:16:06.000 "r_mbytes_per_sec": 0, 00:16:06.000 "w_mbytes_per_sec": 0 00:16:06.000 }, 00:16:06.000 "claimed": false, 00:16:06.000 "zoned": false, 00:16:06.000 "supported_io_types": { 00:16:06.000 "read": true, 00:16:06.000 "write": true, 00:16:06.000 "unmap": true, 00:16:06.000 "flush": true, 00:16:06.000 "reset": true, 00:16:06.000 "nvme_admin": false, 00:16:06.000 "nvme_io": false, 00:16:06.000 "nvme_io_md": false, 00:16:06.000 "write_zeroes": true, 00:16:06.000 "zcopy": true, 00:16:06.000 "get_zone_info": false, 00:16:06.000 "zone_management": false, 00:16:06.000 "zone_append": false, 00:16:06.000 "compare": false, 00:16:06.000 "compare_and_write": false, 00:16:06.000 "abort": true, 00:16:06.000 "seek_hole": false, 00:16:06.000 "seek_data": false, 00:16:06.000 "copy": true, 00:16:06.000 "nvme_iov_md": false 00:16:06.000 }, 00:16:06.000 "memory_domains": [ 00:16:06.000 { 00:16:06.000 "dma_device_id": "system", 00:16:06.000 "dma_device_type": 1 00:16:06.000 }, 00:16:06.000 { 00:16:06.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.000 "dma_device_type": 2 00:16:06.000 } 00:16:06.000 ], 00:16:06.000 "driver_specific": {} 00:16:06.000 } 00:16:06.000 ] 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:06.000 08:53:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.000 BaseBdev4 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.000 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:06.000 [ 00:16:06.000 { 00:16:06.000 "name": "BaseBdev4", 00:16:06.000 "aliases": [ 00:16:06.000 "f08da09c-27aa-4062-a3f5-79db35cd4b61" 00:16:06.000 ], 00:16:06.000 "product_name": "Malloc disk", 00:16:06.000 "block_size": 512, 00:16:06.000 "num_blocks": 65536, 00:16:06.000 "uuid": "f08da09c-27aa-4062-a3f5-79db35cd4b61", 00:16:06.001 "assigned_rate_limits": { 00:16:06.001 "rw_ios_per_sec": 0, 00:16:06.001 "rw_mbytes_per_sec": 0, 00:16:06.001 "r_mbytes_per_sec": 0, 00:16:06.001 "w_mbytes_per_sec": 0 00:16:06.001 }, 00:16:06.001 "claimed": false, 00:16:06.001 "zoned": false, 00:16:06.001 "supported_io_types": { 00:16:06.001 "read": true, 00:16:06.001 "write": true, 00:16:06.001 "unmap": true, 00:16:06.001 "flush": true, 00:16:06.001 "reset": true, 00:16:06.001 "nvme_admin": false, 00:16:06.001 "nvme_io": false, 00:16:06.001 "nvme_io_md": false, 00:16:06.001 "write_zeroes": true, 00:16:06.001 "zcopy": true, 00:16:06.001 "get_zone_info": false, 00:16:06.001 "zone_management": false, 00:16:06.001 "zone_append": false, 00:16:06.001 "compare": false, 00:16:06.001 "compare_and_write": false, 00:16:06.001 "abort": true, 00:16:06.001 "seek_hole": false, 00:16:06.001 "seek_data": false, 00:16:06.001 "copy": true, 00:16:06.001 "nvme_iov_md": false 00:16:06.001 }, 00:16:06.001 "memory_domains": [ 00:16:06.001 { 00:16:06.001 "dma_device_id": "system", 00:16:06.001 "dma_device_type": 1 00:16:06.001 }, 00:16:06.001 { 00:16:06.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.001 "dma_device_type": 2 00:16:06.001 } 00:16:06.001 ], 00:16:06.001 "driver_specific": {} 00:16:06.001 } 00:16:06.001 ] 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:06.001 08:53:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.001 [2024-09-28 08:53:43.911503] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:06.001 [2024-09-28 08:53:43.911617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:06.001 [2024-09-28 08:53:43.911666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.001 [2024-09-28 08:53:43.913712] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.001 [2024-09-28 08:53:43.913805] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.001 "name": "Existed_Raid", 00:16:06.001 "uuid": "f64d3c8f-4965-46b8-b923-eaf9784793b9", 00:16:06.001 "strip_size_kb": 64, 00:16:06.001 "state": "configuring", 00:16:06.001 "raid_level": "raid5f", 00:16:06.001 "superblock": true, 00:16:06.001 "num_base_bdevs": 4, 00:16:06.001 "num_base_bdevs_discovered": 3, 00:16:06.001 "num_base_bdevs_operational": 4, 00:16:06.001 "base_bdevs_list": [ 00:16:06.001 { 00:16:06.001 "name": "BaseBdev1", 00:16:06.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.001 "is_configured": false, 00:16:06.001 "data_offset": 0, 00:16:06.001 "data_size": 0 00:16:06.001 }, 00:16:06.001 { 00:16:06.001 "name": "BaseBdev2", 00:16:06.001 "uuid": "45c6529c-eec1-4047-9332-8413f89b87bd", 00:16:06.001 "is_configured": true, 00:16:06.001 "data_offset": 2048, 00:16:06.001 
"data_size": 63488 00:16:06.001 }, 00:16:06.001 { 00:16:06.001 "name": "BaseBdev3", 00:16:06.001 "uuid": "9809a18d-d2d2-4531-b900-62ea3a28046f", 00:16:06.001 "is_configured": true, 00:16:06.001 "data_offset": 2048, 00:16:06.001 "data_size": 63488 00:16:06.001 }, 00:16:06.001 { 00:16:06.001 "name": "BaseBdev4", 00:16:06.001 "uuid": "f08da09c-27aa-4062-a3f5-79db35cd4b61", 00:16:06.001 "is_configured": true, 00:16:06.001 "data_offset": 2048, 00:16:06.001 "data_size": 63488 00:16:06.001 } 00:16:06.001 ] 00:16:06.001 }' 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.001 08:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.582 [2024-09-28 08:53:44.370799] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.582 08:53:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.582 "name": "Existed_Raid", 00:16:06.582 "uuid": "f64d3c8f-4965-46b8-b923-eaf9784793b9", 00:16:06.582 "strip_size_kb": 64, 00:16:06.582 "state": "configuring", 00:16:06.582 "raid_level": "raid5f", 00:16:06.582 "superblock": true, 00:16:06.582 "num_base_bdevs": 4, 00:16:06.582 "num_base_bdevs_discovered": 2, 00:16:06.582 "num_base_bdevs_operational": 4, 00:16:06.582 "base_bdevs_list": [ 00:16:06.582 { 00:16:06.582 "name": "BaseBdev1", 00:16:06.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.582 "is_configured": false, 00:16:06.582 "data_offset": 0, 00:16:06.582 "data_size": 0 00:16:06.582 }, 00:16:06.582 { 00:16:06.582 "name": null, 00:16:06.582 "uuid": "45c6529c-eec1-4047-9332-8413f89b87bd", 00:16:06.582 
"is_configured": false, 00:16:06.582 "data_offset": 0, 00:16:06.582 "data_size": 63488 00:16:06.582 }, 00:16:06.582 { 00:16:06.582 "name": "BaseBdev3", 00:16:06.582 "uuid": "9809a18d-d2d2-4531-b900-62ea3a28046f", 00:16:06.582 "is_configured": true, 00:16:06.582 "data_offset": 2048, 00:16:06.582 "data_size": 63488 00:16:06.582 }, 00:16:06.582 { 00:16:06.582 "name": "BaseBdev4", 00:16:06.582 "uuid": "f08da09c-27aa-4062-a3f5-79db35cd4b61", 00:16:06.582 "is_configured": true, 00:16:06.582 "data_offset": 2048, 00:16:06.582 "data_size": 63488 00:16:06.582 } 00:16:06.582 ] 00:16:06.582 }' 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.582 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.152 [2024-09-28 08:53:44.914750] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:16:07.152 BaseBdev1 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.152 [ 00:16:07.152 { 00:16:07.152 "name": "BaseBdev1", 00:16:07.152 "aliases": [ 00:16:07.152 "62a95e9e-e8bb-45c8-9e7d-59d8b5dc423d" 00:16:07.152 ], 00:16:07.152 "product_name": "Malloc disk", 00:16:07.152 "block_size": 512, 00:16:07.152 "num_blocks": 65536, 00:16:07.152 "uuid": "62a95e9e-e8bb-45c8-9e7d-59d8b5dc423d", 
00:16:07.152 "assigned_rate_limits": { 00:16:07.152 "rw_ios_per_sec": 0, 00:16:07.152 "rw_mbytes_per_sec": 0, 00:16:07.152 "r_mbytes_per_sec": 0, 00:16:07.152 "w_mbytes_per_sec": 0 00:16:07.152 }, 00:16:07.152 "claimed": true, 00:16:07.152 "claim_type": "exclusive_write", 00:16:07.152 "zoned": false, 00:16:07.152 "supported_io_types": { 00:16:07.152 "read": true, 00:16:07.152 "write": true, 00:16:07.152 "unmap": true, 00:16:07.152 "flush": true, 00:16:07.152 "reset": true, 00:16:07.152 "nvme_admin": false, 00:16:07.152 "nvme_io": false, 00:16:07.152 "nvme_io_md": false, 00:16:07.152 "write_zeroes": true, 00:16:07.152 "zcopy": true, 00:16:07.152 "get_zone_info": false, 00:16:07.152 "zone_management": false, 00:16:07.152 "zone_append": false, 00:16:07.152 "compare": false, 00:16:07.152 "compare_and_write": false, 00:16:07.152 "abort": true, 00:16:07.152 "seek_hole": false, 00:16:07.152 "seek_data": false, 00:16:07.152 "copy": true, 00:16:07.152 "nvme_iov_md": false 00:16:07.152 }, 00:16:07.152 "memory_domains": [ 00:16:07.152 { 00:16:07.152 "dma_device_id": "system", 00:16:07.152 "dma_device_type": 1 00:16:07.152 }, 00:16:07.152 { 00:16:07.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.152 "dma_device_type": 2 00:16:07.152 } 00:16:07.152 ], 00:16:07.152 "driver_specific": {} 00:16:07.152 } 00:16:07.152 ] 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.152 08:53:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.152 08:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.152 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.152 "name": "Existed_Raid", 00:16:07.152 "uuid": "f64d3c8f-4965-46b8-b923-eaf9784793b9", 00:16:07.152 "strip_size_kb": 64, 00:16:07.152 "state": "configuring", 00:16:07.152 "raid_level": "raid5f", 00:16:07.152 "superblock": true, 00:16:07.152 "num_base_bdevs": 4, 00:16:07.152 "num_base_bdevs_discovered": 3, 00:16:07.152 "num_base_bdevs_operational": 4, 00:16:07.152 "base_bdevs_list": [ 00:16:07.152 { 00:16:07.152 "name": "BaseBdev1", 00:16:07.152 "uuid": "62a95e9e-e8bb-45c8-9e7d-59d8b5dc423d", 
00:16:07.152 "is_configured": true, 00:16:07.152 "data_offset": 2048, 00:16:07.152 "data_size": 63488 00:16:07.152 }, 00:16:07.152 { 00:16:07.152 "name": null, 00:16:07.152 "uuid": "45c6529c-eec1-4047-9332-8413f89b87bd", 00:16:07.152 "is_configured": false, 00:16:07.152 "data_offset": 0, 00:16:07.152 "data_size": 63488 00:16:07.152 }, 00:16:07.152 { 00:16:07.152 "name": "BaseBdev3", 00:16:07.152 "uuid": "9809a18d-d2d2-4531-b900-62ea3a28046f", 00:16:07.152 "is_configured": true, 00:16:07.152 "data_offset": 2048, 00:16:07.152 "data_size": 63488 00:16:07.152 }, 00:16:07.152 { 00:16:07.152 "name": "BaseBdev4", 00:16:07.152 "uuid": "f08da09c-27aa-4062-a3f5-79db35cd4b61", 00:16:07.152 "is_configured": true, 00:16:07.152 "data_offset": 2048, 00:16:07.152 "data_size": 63488 00:16:07.152 } 00:16:07.152 ] 00:16:07.152 }' 00:16:07.152 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.152 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.412 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:07.412 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.412 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.412 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.671 [2024-09-28 08:53:45.425914] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.671 "name": "Existed_Raid", 00:16:07.671 "uuid": "f64d3c8f-4965-46b8-b923-eaf9784793b9", 00:16:07.671 "strip_size_kb": 64, 00:16:07.671 "state": "configuring", 00:16:07.671 "raid_level": "raid5f", 00:16:07.671 "superblock": true, 00:16:07.671 "num_base_bdevs": 4, 00:16:07.671 "num_base_bdevs_discovered": 2, 00:16:07.671 "num_base_bdevs_operational": 4, 00:16:07.671 "base_bdevs_list": [ 00:16:07.671 { 00:16:07.671 "name": "BaseBdev1", 00:16:07.671 "uuid": "62a95e9e-e8bb-45c8-9e7d-59d8b5dc423d", 00:16:07.671 "is_configured": true, 00:16:07.671 "data_offset": 2048, 00:16:07.671 "data_size": 63488 00:16:07.671 }, 00:16:07.671 { 00:16:07.671 "name": null, 00:16:07.671 "uuid": "45c6529c-eec1-4047-9332-8413f89b87bd", 00:16:07.671 "is_configured": false, 00:16:07.671 "data_offset": 0, 00:16:07.671 "data_size": 63488 00:16:07.671 }, 00:16:07.671 { 00:16:07.671 "name": null, 00:16:07.671 "uuid": "9809a18d-d2d2-4531-b900-62ea3a28046f", 00:16:07.671 "is_configured": false, 00:16:07.671 "data_offset": 0, 00:16:07.671 "data_size": 63488 00:16:07.671 }, 00:16:07.671 { 00:16:07.671 "name": "BaseBdev4", 00:16:07.671 "uuid": "f08da09c-27aa-4062-a3f5-79db35cd4b61", 00:16:07.671 "is_configured": true, 00:16:07.671 "data_offset": 2048, 00:16:07.671 "data_size": 63488 00:16:07.671 } 00:16:07.671 ] 00:16:07.671 }' 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.671 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.930 [2024-09-28 08:53:45.913066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.930 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.189 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.189 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.189 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.189 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.189 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.189 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.189 "name": "Existed_Raid", 00:16:08.189 "uuid": "f64d3c8f-4965-46b8-b923-eaf9784793b9", 00:16:08.189 "strip_size_kb": 64, 00:16:08.189 "state": "configuring", 00:16:08.189 "raid_level": "raid5f", 00:16:08.189 "superblock": true, 00:16:08.189 "num_base_bdevs": 4, 00:16:08.189 "num_base_bdevs_discovered": 3, 00:16:08.189 "num_base_bdevs_operational": 4, 00:16:08.189 "base_bdevs_list": [ 00:16:08.189 { 00:16:08.189 "name": "BaseBdev1", 00:16:08.189 "uuid": "62a95e9e-e8bb-45c8-9e7d-59d8b5dc423d", 00:16:08.189 "is_configured": true, 00:16:08.189 "data_offset": 2048, 00:16:08.189 "data_size": 63488 00:16:08.189 }, 00:16:08.189 { 00:16:08.189 "name": null, 00:16:08.189 "uuid": "45c6529c-eec1-4047-9332-8413f89b87bd", 00:16:08.189 "is_configured": false, 00:16:08.189 "data_offset": 0, 00:16:08.189 "data_size": 63488 00:16:08.189 }, 00:16:08.189 { 00:16:08.189 "name": "BaseBdev3", 00:16:08.189 "uuid": "9809a18d-d2d2-4531-b900-62ea3a28046f", 
00:16:08.189 "is_configured": true, 00:16:08.189 "data_offset": 2048, 00:16:08.189 "data_size": 63488 00:16:08.189 }, 00:16:08.189 { 00:16:08.189 "name": "BaseBdev4", 00:16:08.189 "uuid": "f08da09c-27aa-4062-a3f5-79db35cd4b61", 00:16:08.189 "is_configured": true, 00:16:08.189 "data_offset": 2048, 00:16:08.189 "data_size": 63488 00:16:08.189 } 00:16:08.189 ] 00:16:08.189 }' 00:16:08.189 08:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.189 08:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.449 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.449 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:08.449 08:53:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.449 08:53:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.449 08:53:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.449 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:08.449 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:08.449 08:53:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.449 08:53:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.449 [2024-09-28 08:53:46.436207] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.708 "name": "Existed_Raid", 00:16:08.708 "uuid": "f64d3c8f-4965-46b8-b923-eaf9784793b9", 00:16:08.708 "strip_size_kb": 64, 00:16:08.708 "state": "configuring", 00:16:08.708 "raid_level": "raid5f", 
00:16:08.708 "superblock": true, 00:16:08.708 "num_base_bdevs": 4, 00:16:08.708 "num_base_bdevs_discovered": 2, 00:16:08.708 "num_base_bdevs_operational": 4, 00:16:08.708 "base_bdevs_list": [ 00:16:08.708 { 00:16:08.708 "name": null, 00:16:08.708 "uuid": "62a95e9e-e8bb-45c8-9e7d-59d8b5dc423d", 00:16:08.708 "is_configured": false, 00:16:08.708 "data_offset": 0, 00:16:08.708 "data_size": 63488 00:16:08.708 }, 00:16:08.708 { 00:16:08.708 "name": null, 00:16:08.708 "uuid": "45c6529c-eec1-4047-9332-8413f89b87bd", 00:16:08.708 "is_configured": false, 00:16:08.708 "data_offset": 0, 00:16:08.708 "data_size": 63488 00:16:08.708 }, 00:16:08.708 { 00:16:08.708 "name": "BaseBdev3", 00:16:08.708 "uuid": "9809a18d-d2d2-4531-b900-62ea3a28046f", 00:16:08.708 "is_configured": true, 00:16:08.708 "data_offset": 2048, 00:16:08.708 "data_size": 63488 00:16:08.708 }, 00:16:08.708 { 00:16:08.708 "name": "BaseBdev4", 00:16:08.708 "uuid": "f08da09c-27aa-4062-a3f5-79db35cd4b61", 00:16:08.708 "is_configured": true, 00:16:08.708 "data_offset": 2048, 00:16:08.708 "data_size": 63488 00:16:08.708 } 00:16:08.708 ] 00:16:08.708 }' 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.708 08:53:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.276 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:09.276 08:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.276 08:53:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.276 08:53:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.276 08:53:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.276 [2024-09-28 08:53:47.011697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.276 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.276 "name": "Existed_Raid", 00:16:09.276 "uuid": "f64d3c8f-4965-46b8-b923-eaf9784793b9", 00:16:09.276 "strip_size_kb": 64, 00:16:09.276 "state": "configuring", 00:16:09.276 "raid_level": "raid5f", 00:16:09.276 "superblock": true, 00:16:09.276 "num_base_bdevs": 4, 00:16:09.276 "num_base_bdevs_discovered": 3, 00:16:09.276 "num_base_bdevs_operational": 4, 00:16:09.276 "base_bdevs_list": [ 00:16:09.276 { 00:16:09.276 "name": null, 00:16:09.276 "uuid": "62a95e9e-e8bb-45c8-9e7d-59d8b5dc423d", 00:16:09.276 "is_configured": false, 00:16:09.276 "data_offset": 0, 00:16:09.276 "data_size": 63488 00:16:09.276 }, 00:16:09.276 { 00:16:09.276 "name": "BaseBdev2", 00:16:09.276 "uuid": "45c6529c-eec1-4047-9332-8413f89b87bd", 00:16:09.276 "is_configured": true, 00:16:09.276 "data_offset": 2048, 00:16:09.276 "data_size": 63488 00:16:09.276 }, 00:16:09.276 { 00:16:09.276 "name": "BaseBdev3", 00:16:09.277 "uuid": "9809a18d-d2d2-4531-b900-62ea3a28046f", 00:16:09.277 "is_configured": true, 00:16:09.277 "data_offset": 2048, 00:16:09.277 "data_size": 63488 00:16:09.277 }, 00:16:09.277 { 00:16:09.277 "name": "BaseBdev4", 00:16:09.277 "uuid": "f08da09c-27aa-4062-a3f5-79db35cd4b61", 00:16:09.277 "is_configured": true, 00:16:09.277 "data_offset": 2048, 00:16:09.277 "data_size": 63488 00:16:09.277 } 00:16:09.277 ] 00:16:09.277 }' 00:16:09.277 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:16:09.277 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.536 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.536 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:09.536 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.536 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.536 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.536 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:09.536 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:09.536 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.536 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.536 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 62a95e9e-e8bb-45c8-9e7d-59d8b5dc423d 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.796 [2024-09-28 08:53:47.607566] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:09.796 [2024-09-28 08:53:47.607901] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:09.796 [2024-09-28 08:53:47.607919] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:09.796 [2024-09-28 08:53:47.608198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:09.796 NewBaseBdev 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.796 [2024-09-28 08:53:47.615201] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:09.796 [2024-09-28 08:53:47.615265] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:09.796 [2024-09-28 08:53:47.615588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.796 [ 00:16:09.796 { 00:16:09.796 "name": "NewBaseBdev", 00:16:09.796 "aliases": [ 00:16:09.796 "62a95e9e-e8bb-45c8-9e7d-59d8b5dc423d" 00:16:09.796 ], 00:16:09.796 "product_name": "Malloc disk", 00:16:09.796 "block_size": 512, 00:16:09.796 "num_blocks": 65536, 00:16:09.796 "uuid": "62a95e9e-e8bb-45c8-9e7d-59d8b5dc423d", 00:16:09.796 "assigned_rate_limits": { 00:16:09.796 "rw_ios_per_sec": 0, 00:16:09.796 "rw_mbytes_per_sec": 0, 00:16:09.796 "r_mbytes_per_sec": 0, 00:16:09.796 "w_mbytes_per_sec": 0 00:16:09.796 }, 00:16:09.796 "claimed": true, 00:16:09.796 "claim_type": "exclusive_write", 00:16:09.796 "zoned": false, 00:16:09.796 "supported_io_types": { 00:16:09.796 "read": true, 00:16:09.796 "write": true, 00:16:09.796 "unmap": true, 00:16:09.796 "flush": true, 00:16:09.796 "reset": true, 00:16:09.796 "nvme_admin": false, 00:16:09.796 "nvme_io": false, 00:16:09.796 "nvme_io_md": false, 00:16:09.796 "write_zeroes": true, 00:16:09.796 "zcopy": true, 00:16:09.796 "get_zone_info": false, 00:16:09.796 "zone_management": false, 00:16:09.796 "zone_append": false, 00:16:09.796 "compare": false, 00:16:09.796 "compare_and_write": false, 00:16:09.796 "abort": true, 00:16:09.796 "seek_hole": false, 00:16:09.796 "seek_data": false, 00:16:09.796 "copy": true, 00:16:09.796 "nvme_iov_md": false 00:16:09.796 }, 00:16:09.796 "memory_domains": [ 00:16:09.796 { 00:16:09.796 "dma_device_id": "system", 00:16:09.796 "dma_device_type": 1 00:16:09.796 }, 00:16:09.796 { 00:16:09.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.796 "dma_device_type": 2 00:16:09.796 } 
00:16:09.796 ], 00:16:09.796 "driver_specific": {} 00:16:09.796 } 00:16:09.796 ] 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.796 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.797 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.797 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.797 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.797 
08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.797 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.797 "name": "Existed_Raid", 00:16:09.797 "uuid": "f64d3c8f-4965-46b8-b923-eaf9784793b9", 00:16:09.797 "strip_size_kb": 64, 00:16:09.797 "state": "online", 00:16:09.797 "raid_level": "raid5f", 00:16:09.797 "superblock": true, 00:16:09.797 "num_base_bdevs": 4, 00:16:09.797 "num_base_bdevs_discovered": 4, 00:16:09.797 "num_base_bdevs_operational": 4, 00:16:09.797 "base_bdevs_list": [ 00:16:09.797 { 00:16:09.797 "name": "NewBaseBdev", 00:16:09.797 "uuid": "62a95e9e-e8bb-45c8-9e7d-59d8b5dc423d", 00:16:09.797 "is_configured": true, 00:16:09.797 "data_offset": 2048, 00:16:09.797 "data_size": 63488 00:16:09.797 }, 00:16:09.797 { 00:16:09.797 "name": "BaseBdev2", 00:16:09.797 "uuid": "45c6529c-eec1-4047-9332-8413f89b87bd", 00:16:09.797 "is_configured": true, 00:16:09.797 "data_offset": 2048, 00:16:09.797 "data_size": 63488 00:16:09.797 }, 00:16:09.797 { 00:16:09.797 "name": "BaseBdev3", 00:16:09.797 "uuid": "9809a18d-d2d2-4531-b900-62ea3a28046f", 00:16:09.797 "is_configured": true, 00:16:09.797 "data_offset": 2048, 00:16:09.797 "data_size": 63488 00:16:09.797 }, 00:16:09.797 { 00:16:09.797 "name": "BaseBdev4", 00:16:09.797 "uuid": "f08da09c-27aa-4062-a3f5-79db35cd4b61", 00:16:09.797 "is_configured": true, 00:16:09.797 "data_offset": 2048, 00:16:09.797 "data_size": 63488 00:16:09.797 } 00:16:09.797 ] 00:16:09.797 }' 00:16:09.797 08:53:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.797 08:53:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.364 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:10.364 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:16:10.364 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:10.364 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:10.364 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:10.364 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:10.364 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:10.364 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:10.364 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.364 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.364 [2024-09-28 08:53:48.087785] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.364 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.364 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:10.364 "name": "Existed_Raid", 00:16:10.364 "aliases": [ 00:16:10.364 "f64d3c8f-4965-46b8-b923-eaf9784793b9" 00:16:10.364 ], 00:16:10.364 "product_name": "Raid Volume", 00:16:10.364 "block_size": 512, 00:16:10.364 "num_blocks": 190464, 00:16:10.364 "uuid": "f64d3c8f-4965-46b8-b923-eaf9784793b9", 00:16:10.364 "assigned_rate_limits": { 00:16:10.364 "rw_ios_per_sec": 0, 00:16:10.364 "rw_mbytes_per_sec": 0, 00:16:10.364 "r_mbytes_per_sec": 0, 00:16:10.364 "w_mbytes_per_sec": 0 00:16:10.364 }, 00:16:10.364 "claimed": false, 00:16:10.364 "zoned": false, 00:16:10.364 "supported_io_types": { 00:16:10.364 "read": true, 00:16:10.364 "write": true, 00:16:10.364 "unmap": false, 00:16:10.364 "flush": false, 
00:16:10.365 "reset": true, 00:16:10.365 "nvme_admin": false, 00:16:10.365 "nvme_io": false, 00:16:10.365 "nvme_io_md": false, 00:16:10.365 "write_zeroes": true, 00:16:10.365 "zcopy": false, 00:16:10.365 "get_zone_info": false, 00:16:10.365 "zone_management": false, 00:16:10.365 "zone_append": false, 00:16:10.365 "compare": false, 00:16:10.365 "compare_and_write": false, 00:16:10.365 "abort": false, 00:16:10.365 "seek_hole": false, 00:16:10.365 "seek_data": false, 00:16:10.365 "copy": false, 00:16:10.365 "nvme_iov_md": false 00:16:10.365 }, 00:16:10.365 "driver_specific": { 00:16:10.365 "raid": { 00:16:10.365 "uuid": "f64d3c8f-4965-46b8-b923-eaf9784793b9", 00:16:10.365 "strip_size_kb": 64, 00:16:10.365 "state": "online", 00:16:10.365 "raid_level": "raid5f", 00:16:10.365 "superblock": true, 00:16:10.365 "num_base_bdevs": 4, 00:16:10.365 "num_base_bdevs_discovered": 4, 00:16:10.365 "num_base_bdevs_operational": 4, 00:16:10.365 "base_bdevs_list": [ 00:16:10.365 { 00:16:10.365 "name": "NewBaseBdev", 00:16:10.365 "uuid": "62a95e9e-e8bb-45c8-9e7d-59d8b5dc423d", 00:16:10.365 "is_configured": true, 00:16:10.365 "data_offset": 2048, 00:16:10.365 "data_size": 63488 00:16:10.365 }, 00:16:10.365 { 00:16:10.365 "name": "BaseBdev2", 00:16:10.365 "uuid": "45c6529c-eec1-4047-9332-8413f89b87bd", 00:16:10.365 "is_configured": true, 00:16:10.365 "data_offset": 2048, 00:16:10.365 "data_size": 63488 00:16:10.365 }, 00:16:10.365 { 00:16:10.365 "name": "BaseBdev3", 00:16:10.365 "uuid": "9809a18d-d2d2-4531-b900-62ea3a28046f", 00:16:10.365 "is_configured": true, 00:16:10.365 "data_offset": 2048, 00:16:10.365 "data_size": 63488 00:16:10.365 }, 00:16:10.365 { 00:16:10.365 "name": "BaseBdev4", 00:16:10.365 "uuid": "f08da09c-27aa-4062-a3f5-79db35cd4b61", 00:16:10.365 "is_configured": true, 00:16:10.365 "data_offset": 2048, 00:16:10.365 "data_size": 63488 00:16:10.365 } 00:16:10.365 ] 00:16:10.365 } 00:16:10.365 } 00:16:10.365 }' 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:10.365 BaseBdev2 00:16:10.365 BaseBdev3 00:16:10.365 BaseBdev4' 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.365 
08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.365 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:10.625 08:53:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.625 [2024-09-28 08:53:48.399091] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:10.625 [2024-09-28 08:53:48.399162] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:10.625 [2024-09-28 08:53:48.399234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:10.625 [2024-09-28 08:53:48.399576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:10.625 [2024-09-28 08:53:48.399588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83445 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83445 ']' 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 83445 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83445 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83445' 00:16:10.625 killing process with pid 83445 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83445 00:16:10.625 [2024-09-28 08:53:48.450248] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:10.625 08:53:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83445 00:16:10.894 [2024-09-28 08:53:48.860311] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:12.276 ************************************ 00:16:12.276 END TEST raid5f_state_function_test_sb 00:16:12.276 ************************************ 00:16:12.276 08:53:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:12.276 00:16:12.276 real 0m11.891s 00:16:12.276 user 0m18.410s 00:16:12.276 sys 0m2.395s 00:16:12.276 08:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:12.276 08:53:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.276 08:53:50 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:12.276 08:53:50 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:12.276 08:53:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:12.276 08:53:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:12.276 ************************************ 00:16:12.276 START TEST raid5f_superblock_test 00:16:12.276 ************************************ 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84123 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84123 00:16:12.276 08:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84123 ']' 00:16:12.536 08:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.536 08:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:12.536 08:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.536 08:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:12.536 08:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.536 [2024-09-28 08:53:50.354865] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:12.536 [2024-09-28 08:53:50.355083] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84123 ] 00:16:12.536 [2024-09-28 08:53:50.519330] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.796 [2024-09-28 08:53:50.762068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.055 [2024-09-28 08:53:50.994055] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.056 [2024-09-28 08:53:50.994146] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.315 malloc1 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.315 [2024-09-28 08:53:51.227279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:13.315 [2024-09-28 08:53:51.227428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.315 [2024-09-28 08:53:51.227468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:13.315 [2024-09-28 08:53:51.227500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.315 [2024-09-28 08:53:51.229855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.315 [2024-09-28 08:53:51.229888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:13.315 pt1 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.315 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.576 malloc2 00:16:13.576 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.576 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:13.576 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.576 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.576 [2024-09-28 08:53:51.317423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:13.576 [2024-09-28 08:53:51.317543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.576 [2024-09-28 08:53:51.317580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:13.576 [2024-09-28 08:53:51.317607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.576 [2024-09-28 08:53:51.319996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.576 [2024-09-28 08:53:51.320067] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:13.576 pt2 00:16:13.576 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.576 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.577 malloc3 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.577 [2024-09-28 08:53:51.377289] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:13.577 [2024-09-28 08:53:51.377384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.577 [2024-09-28 08:53:51.377419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:13.577 [2024-09-28 08:53:51.377446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.577 [2024-09-28 08:53:51.379769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.577 [2024-09-28 08:53:51.379839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:13.577 pt3 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.577 08:53:51 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.577 malloc4 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.577 [2024-09-28 08:53:51.434565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:13.577 [2024-09-28 08:53:51.434665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.577 [2024-09-28 08:53:51.434700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:13.577 [2024-09-28 08:53:51.434728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.577 [2024-09-28 08:53:51.437038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.577 [2024-09-28 08:53:51.437105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:13.577 pt4 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:13.577 [2024-09-28 08:53:51.446599] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:13.577 [2024-09-28 08:53:51.448678] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:13.577 [2024-09-28 08:53:51.448777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:13.577 [2024-09-28 08:53:51.448859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:13.577 [2024-09-28 08:53:51.449102] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:13.577 [2024-09-28 08:53:51.449155] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:13.577 [2024-09-28 08:53:51.449420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:13.577 [2024-09-28 08:53:51.456741] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:13.577 [2024-09-28 08:53:51.456796] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:13.577 [2024-09-28 08:53:51.457000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.577 
08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.577 "name": "raid_bdev1", 00:16:13.577 "uuid": "78d6f8dc-59f7-4f93-bb37-b85f80295735", 00:16:13.577 "strip_size_kb": 64, 00:16:13.577 "state": "online", 00:16:13.577 "raid_level": "raid5f", 00:16:13.577 "superblock": true, 00:16:13.577 "num_base_bdevs": 4, 00:16:13.577 "num_base_bdevs_discovered": 4, 00:16:13.577 "num_base_bdevs_operational": 4, 00:16:13.577 "base_bdevs_list": [ 00:16:13.577 { 00:16:13.577 "name": "pt1", 00:16:13.577 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:13.577 "is_configured": true, 00:16:13.577 "data_offset": 2048, 00:16:13.577 "data_size": 63488 00:16:13.577 }, 00:16:13.577 { 00:16:13.577 "name": "pt2", 00:16:13.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:13.577 "is_configured": true, 00:16:13.577 "data_offset": 2048, 00:16:13.577 
"data_size": 63488 00:16:13.577 }, 00:16:13.577 { 00:16:13.577 "name": "pt3", 00:16:13.577 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:13.577 "is_configured": true, 00:16:13.577 "data_offset": 2048, 00:16:13.577 "data_size": 63488 00:16:13.577 }, 00:16:13.577 { 00:16:13.577 "name": "pt4", 00:16:13.577 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:13.577 "is_configured": true, 00:16:13.577 "data_offset": 2048, 00:16:13.577 "data_size": 63488 00:16:13.577 } 00:16:13.577 ] 00:16:13.577 }' 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.577 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.146 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:14.146 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:14.146 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:14.146 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:14.146 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:14.146 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:14.146 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:14.146 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:14.146 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.146 08:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.146 [2024-09-28 08:53:51.905207] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.146 08:53:51 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.146 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:14.146 "name": "raid_bdev1", 00:16:14.146 "aliases": [ 00:16:14.146 "78d6f8dc-59f7-4f93-bb37-b85f80295735" 00:16:14.146 ], 00:16:14.146 "product_name": "Raid Volume", 00:16:14.146 "block_size": 512, 00:16:14.146 "num_blocks": 190464, 00:16:14.146 "uuid": "78d6f8dc-59f7-4f93-bb37-b85f80295735", 00:16:14.146 "assigned_rate_limits": { 00:16:14.146 "rw_ios_per_sec": 0, 00:16:14.146 "rw_mbytes_per_sec": 0, 00:16:14.146 "r_mbytes_per_sec": 0, 00:16:14.146 "w_mbytes_per_sec": 0 00:16:14.146 }, 00:16:14.146 "claimed": false, 00:16:14.146 "zoned": false, 00:16:14.146 "supported_io_types": { 00:16:14.146 "read": true, 00:16:14.146 "write": true, 00:16:14.146 "unmap": false, 00:16:14.146 "flush": false, 00:16:14.146 "reset": true, 00:16:14.146 "nvme_admin": false, 00:16:14.146 "nvme_io": false, 00:16:14.146 "nvme_io_md": false, 00:16:14.146 "write_zeroes": true, 00:16:14.146 "zcopy": false, 00:16:14.146 "get_zone_info": false, 00:16:14.146 "zone_management": false, 00:16:14.146 "zone_append": false, 00:16:14.146 "compare": false, 00:16:14.146 "compare_and_write": false, 00:16:14.146 "abort": false, 00:16:14.146 "seek_hole": false, 00:16:14.146 "seek_data": false, 00:16:14.146 "copy": false, 00:16:14.146 "nvme_iov_md": false 00:16:14.146 }, 00:16:14.146 "driver_specific": { 00:16:14.146 "raid": { 00:16:14.147 "uuid": "78d6f8dc-59f7-4f93-bb37-b85f80295735", 00:16:14.147 "strip_size_kb": 64, 00:16:14.147 "state": "online", 00:16:14.147 "raid_level": "raid5f", 00:16:14.147 "superblock": true, 00:16:14.147 "num_base_bdevs": 4, 00:16:14.147 "num_base_bdevs_discovered": 4, 00:16:14.147 "num_base_bdevs_operational": 4, 00:16:14.147 "base_bdevs_list": [ 00:16:14.147 { 00:16:14.147 "name": "pt1", 00:16:14.147 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:14.147 "is_configured": true, 00:16:14.147 "data_offset": 2048, 
00:16:14.147 "data_size": 63488 00:16:14.147 }, 00:16:14.147 { 00:16:14.147 "name": "pt2", 00:16:14.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:14.147 "is_configured": true, 00:16:14.147 "data_offset": 2048, 00:16:14.147 "data_size": 63488 00:16:14.147 }, 00:16:14.147 { 00:16:14.147 "name": "pt3", 00:16:14.147 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:14.147 "is_configured": true, 00:16:14.147 "data_offset": 2048, 00:16:14.147 "data_size": 63488 00:16:14.147 }, 00:16:14.147 { 00:16:14.147 "name": "pt4", 00:16:14.147 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:14.147 "is_configured": true, 00:16:14.147 "data_offset": 2048, 00:16:14.147 "data_size": 63488 00:16:14.147 } 00:16:14.147 ] 00:16:14.147 } 00:16:14.147 } 00:16:14.147 }' 00:16:14.147 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:14.147 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:14.147 pt2 00:16:14.147 pt3 00:16:14.147 pt4' 00:16:14.147 08:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.147 08:53:52 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.147 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.406 [2024-09-28 08:53:52.236612] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=78d6f8dc-59f7-4f93-bb37-b85f80295735 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
78d6f8dc-59f7-4f93-bb37-b85f80295735 ']' 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.406 [2024-09-28 08:53:52.280384] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.406 [2024-09-28 08:53:52.280407] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.406 [2024-09-28 08:53:52.280479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.406 [2024-09-28 08:53:52.280553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.406 [2024-09-28 08:53:52.280568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:14.406 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:14.407 
08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.407 08:53:52 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.407 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.666 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.666 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:14.666 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:14.666 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:14.666 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:14.666 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:14.666 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:14.666 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:14.666 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:14.666 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:14.666 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:14.666 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.666 [2024-09-28 08:53:52.444119] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:14.666 [2024-09-28 08:53:52.446131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:14.666 [2024-09-28 08:53:52.446177] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:14.667 [2024-09-28 08:53:52.446209] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:14.667 [2024-09-28 08:53:52.446256] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:14.667 [2024-09-28 08:53:52.446299] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:14.667 [2024-09-28 08:53:52.446317] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:14.667 [2024-09-28 08:53:52.446335] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:14.667 [2024-09-28 08:53:52.446348] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.667 [2024-09-28 08:53:52.446359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:14.667 request: 00:16:14.667 { 00:16:14.667 "name": "raid_bdev1", 00:16:14.667 "raid_level": "raid5f", 00:16:14.667 "base_bdevs": [ 00:16:14.667 "malloc1", 00:16:14.667 "malloc2", 00:16:14.667 "malloc3", 00:16:14.667 "malloc4" 00:16:14.667 ], 00:16:14.667 "strip_size_kb": 64, 00:16:14.667 "superblock": false, 00:16:14.667 "method": "bdev_raid_create", 00:16:14.667 "req_id": 1 00:16:14.667 } 00:16:14.667 Got JSON-RPC error response 
00:16:14.667 response: 00:16:14.667 { 00:16:14.667 "code": -17, 00:16:14.667 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:14.667 } 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.667 [2024-09-28 08:53:52.511981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:14.667 [2024-09-28 08:53:52.512068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:14.667 [2024-09-28 08:53:52.512098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:14.667 [2024-09-28 08:53:52.512127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.667 [2024-09-28 08:53:52.514454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.667 [2024-09-28 08:53:52.514525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:14.667 [2024-09-28 08:53:52.514608] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:14.667 [2024-09-28 08:53:52.514716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:14.667 pt1 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.667 "name": "raid_bdev1", 00:16:14.667 "uuid": "78d6f8dc-59f7-4f93-bb37-b85f80295735", 00:16:14.667 "strip_size_kb": 64, 00:16:14.667 "state": "configuring", 00:16:14.667 "raid_level": "raid5f", 00:16:14.667 "superblock": true, 00:16:14.667 "num_base_bdevs": 4, 00:16:14.667 "num_base_bdevs_discovered": 1, 00:16:14.667 "num_base_bdevs_operational": 4, 00:16:14.667 "base_bdevs_list": [ 00:16:14.667 { 00:16:14.667 "name": "pt1", 00:16:14.667 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:14.667 "is_configured": true, 00:16:14.667 "data_offset": 2048, 00:16:14.667 "data_size": 63488 00:16:14.667 }, 00:16:14.667 { 00:16:14.667 "name": null, 00:16:14.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:14.667 "is_configured": false, 00:16:14.667 "data_offset": 2048, 00:16:14.667 "data_size": 63488 00:16:14.667 }, 00:16:14.667 { 00:16:14.667 "name": null, 00:16:14.667 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:14.667 "is_configured": false, 00:16:14.667 "data_offset": 2048, 00:16:14.667 "data_size": 63488 00:16:14.667 }, 00:16:14.667 { 00:16:14.667 "name": null, 00:16:14.667 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:14.667 "is_configured": false, 00:16:14.667 "data_offset": 2048, 00:16:14.667 "data_size": 63488 00:16:14.667 } 00:16:14.667 ] 00:16:14.667 }' 
00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.667 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.236 [2024-09-28 08:53:52.951266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.236 [2024-09-28 08:53:52.951359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.236 [2024-09-28 08:53:52.951389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:15.236 [2024-09-28 08:53:52.951417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.236 [2024-09-28 08:53:52.951840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.236 [2024-09-28 08:53:52.951899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.236 [2024-09-28 08:53:52.951983] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:15.236 [2024-09-28 08:53:52.952030] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:15.236 pt2 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.236 [2024-09-28 08:53:52.963265] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.236 08:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:15.236 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.237 "name": "raid_bdev1", 00:16:15.237 "uuid": "78d6f8dc-59f7-4f93-bb37-b85f80295735", 00:16:15.237 "strip_size_kb": 64, 00:16:15.237 "state": "configuring", 00:16:15.237 "raid_level": "raid5f", 00:16:15.237 "superblock": true, 00:16:15.237 "num_base_bdevs": 4, 00:16:15.237 "num_base_bdevs_discovered": 1, 00:16:15.237 "num_base_bdevs_operational": 4, 00:16:15.237 "base_bdevs_list": [ 00:16:15.237 { 00:16:15.237 "name": "pt1", 00:16:15.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:15.237 "is_configured": true, 00:16:15.237 "data_offset": 2048, 00:16:15.237 "data_size": 63488 00:16:15.237 }, 00:16:15.237 { 00:16:15.237 "name": null, 00:16:15.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:15.237 "is_configured": false, 00:16:15.237 "data_offset": 0, 00:16:15.237 "data_size": 63488 00:16:15.237 }, 00:16:15.237 { 00:16:15.237 "name": null, 00:16:15.237 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:15.237 "is_configured": false, 00:16:15.237 "data_offset": 2048, 00:16:15.237 "data_size": 63488 00:16:15.237 }, 00:16:15.237 { 00:16:15.237 "name": null, 00:16:15.237 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:15.237 "is_configured": false, 00:16:15.237 "data_offset": 2048, 00:16:15.237 "data_size": 63488 00:16:15.237 } 00:16:15.237 ] 00:16:15.237 }' 00:16:15.237 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.237 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
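The `bdev_raid.sh@478` loop that follows attaches the remaining members one by one: for each index it wraps `malloc<N>` in a passthru bdev `pt<N>` with a fixed zero-padded UUID, and the raid array flips from `configuring` to `online` once every base bdev is discovered. A hypothetical simulation of that naming scheme and state transition (not the real RPC calls) is:

```python
# Illustrative mirror of the @478 loop in the trace: pt1 is already attached,
# then pt2..pt4 are created via `bdev_passthru_create -b malloc<N> -p pt<N> -u <uuid>`.
num_base_bdevs = 4
discovered = 1  # pt1 was claimed before the loop started

created = []
for i in range(2, num_base_bdevs + 1):
    name = f"pt{i}"
    # UUIDs in the log are all-zero except the final zero-padded index.
    uuid = f"00000000-0000-0000-0000-{i:012d}"
    created.append((name, uuid))
    discovered += 1

state = "online" if discovered == num_base_bdevs else "configuring"
print(created[0])  # ('pt2', '00000000-0000-0000-0000-000000000002')
print(state)       # online
```

In the actual log this transition is visible in the `raid_bdev_configure_cont` debug lines, after which `verify_raid_bdev_state raid_bdev1 online raid5f 64 4` passes.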
00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.497 [2024-09-28 08:53:53.418568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.497 [2024-09-28 08:53:53.418664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.497 [2024-09-28 08:53:53.418699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:15.497 [2024-09-28 08:53:53.418728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.497 [2024-09-28 08:53:53.419112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.497 [2024-09-28 08:53:53.419165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.497 [2024-09-28 08:53:53.419254] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:15.497 [2024-09-28 08:53:53.419306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:15.497 pt2 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.497 [2024-09-28 08:53:53.430537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:15.497 [2024-09-28 08:53:53.430580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.497 [2024-09-28 08:53:53.430596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:15.497 [2024-09-28 08:53:53.430603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.497 [2024-09-28 08:53:53.430930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.497 [2024-09-28 08:53:53.430946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:15.497 [2024-09-28 08:53:53.431001] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:15.497 [2024-09-28 08:53:53.431015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:15.497 pt3 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.497 [2024-09-28 08:53:53.442496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:15.497 [2024-09-28 08:53:53.442540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.497 [2024-09-28 08:53:53.442558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:15.497 [2024-09-28 08:53:53.442565] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.497 [2024-09-28 08:53:53.442910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.497 [2024-09-28 08:53:53.442926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:15.497 [2024-09-28 08:53:53.442978] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:15.497 [2024-09-28 08:53:53.442999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:15.497 [2024-09-28 08:53:53.443127] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:15.497 [2024-09-28 08:53:53.443142] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:15.497 [2024-09-28 08:53:53.443414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:15.497 [2024-09-28 08:53:53.449993] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:15.497 [2024-09-28 08:53:53.450016] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:15.497 [2024-09-28 08:53:53.450175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.497 pt4 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.497 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.757 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.757 "name": "raid_bdev1", 00:16:15.757 "uuid": "78d6f8dc-59f7-4f93-bb37-b85f80295735", 00:16:15.757 "strip_size_kb": 64, 00:16:15.757 "state": "online", 00:16:15.757 "raid_level": "raid5f", 00:16:15.757 "superblock": true, 00:16:15.757 "num_base_bdevs": 4, 00:16:15.757 "num_base_bdevs_discovered": 4, 00:16:15.757 "num_base_bdevs_operational": 4, 00:16:15.757 "base_bdevs_list": [ 00:16:15.757 { 00:16:15.757 "name": "pt1", 00:16:15.757 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:15.757 "is_configured": true, 00:16:15.757 
"data_offset": 2048, 00:16:15.757 "data_size": 63488 00:16:15.757 }, 00:16:15.757 { 00:16:15.757 "name": "pt2", 00:16:15.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:15.757 "is_configured": true, 00:16:15.757 "data_offset": 2048, 00:16:15.757 "data_size": 63488 00:16:15.757 }, 00:16:15.757 { 00:16:15.757 "name": "pt3", 00:16:15.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:15.757 "is_configured": true, 00:16:15.757 "data_offset": 2048, 00:16:15.757 "data_size": 63488 00:16:15.757 }, 00:16:15.757 { 00:16:15.757 "name": "pt4", 00:16:15.757 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:15.757 "is_configured": true, 00:16:15.757 "data_offset": 2048, 00:16:15.757 "data_size": 63488 00:16:15.757 } 00:16:15.757 ] 00:16:15.757 }' 00:16:15.757 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.757 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.017 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:16.017 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:16.017 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:16.017 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:16.017 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:16.017 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:16.017 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:16.017 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.018 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.018 08:53:53 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:16.018 [2024-09-28 08:53:53.906622] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.018 08:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.018 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:16.018 "name": "raid_bdev1", 00:16:16.018 "aliases": [ 00:16:16.018 "78d6f8dc-59f7-4f93-bb37-b85f80295735" 00:16:16.018 ], 00:16:16.018 "product_name": "Raid Volume", 00:16:16.018 "block_size": 512, 00:16:16.018 "num_blocks": 190464, 00:16:16.018 "uuid": "78d6f8dc-59f7-4f93-bb37-b85f80295735", 00:16:16.018 "assigned_rate_limits": { 00:16:16.018 "rw_ios_per_sec": 0, 00:16:16.018 "rw_mbytes_per_sec": 0, 00:16:16.018 "r_mbytes_per_sec": 0, 00:16:16.018 "w_mbytes_per_sec": 0 00:16:16.018 }, 00:16:16.018 "claimed": false, 00:16:16.018 "zoned": false, 00:16:16.018 "supported_io_types": { 00:16:16.018 "read": true, 00:16:16.018 "write": true, 00:16:16.018 "unmap": false, 00:16:16.018 "flush": false, 00:16:16.018 "reset": true, 00:16:16.018 "nvme_admin": false, 00:16:16.018 "nvme_io": false, 00:16:16.018 "nvme_io_md": false, 00:16:16.018 "write_zeroes": true, 00:16:16.018 "zcopy": false, 00:16:16.018 "get_zone_info": false, 00:16:16.018 "zone_management": false, 00:16:16.018 "zone_append": false, 00:16:16.018 "compare": false, 00:16:16.018 "compare_and_write": false, 00:16:16.018 "abort": false, 00:16:16.018 "seek_hole": false, 00:16:16.018 "seek_data": false, 00:16:16.018 "copy": false, 00:16:16.018 "nvme_iov_md": false 00:16:16.018 }, 00:16:16.018 "driver_specific": { 00:16:16.018 "raid": { 00:16:16.018 "uuid": "78d6f8dc-59f7-4f93-bb37-b85f80295735", 00:16:16.018 "strip_size_kb": 64, 00:16:16.018 "state": "online", 00:16:16.018 "raid_level": "raid5f", 00:16:16.018 "superblock": true, 00:16:16.018 "num_base_bdevs": 4, 00:16:16.018 "num_base_bdevs_discovered": 4, 
00:16:16.018 "num_base_bdevs_operational": 4, 00:16:16.018 "base_bdevs_list": [ 00:16:16.018 { 00:16:16.018 "name": "pt1", 00:16:16.018 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:16.018 "is_configured": true, 00:16:16.018 "data_offset": 2048, 00:16:16.018 "data_size": 63488 00:16:16.018 }, 00:16:16.018 { 00:16:16.018 "name": "pt2", 00:16:16.018 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.018 "is_configured": true, 00:16:16.018 "data_offset": 2048, 00:16:16.018 "data_size": 63488 00:16:16.018 }, 00:16:16.018 { 00:16:16.018 "name": "pt3", 00:16:16.018 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:16.018 "is_configured": true, 00:16:16.018 "data_offset": 2048, 00:16:16.018 "data_size": 63488 00:16:16.018 }, 00:16:16.018 { 00:16:16.018 "name": "pt4", 00:16:16.018 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:16.018 "is_configured": true, 00:16:16.018 "data_offset": 2048, 00:16:16.018 "data_size": 63488 00:16:16.018 } 00:16:16.018 ] 00:16:16.018 } 00:16:16.018 } 00:16:16.018 }' 00:16:16.018 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:16.018 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:16.018 pt2 00:16:16.018 pt3 00:16:16.018 pt4' 00:16:16.018 08:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.278 08:53:54 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.278 [2024-09-28 08:53:54.230027] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.278 
08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 78d6f8dc-59f7-4f93-bb37-b85f80295735 '!=' 78d6f8dc-59f7-4f93-bb37-b85f80295735 ']' 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.278 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.278 [2024-09-28 08:53:54.269839] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.538 "name": "raid_bdev1", 00:16:16.538 "uuid": "78d6f8dc-59f7-4f93-bb37-b85f80295735", 00:16:16.538 "strip_size_kb": 64, 00:16:16.538 "state": "online", 00:16:16.538 "raid_level": "raid5f", 00:16:16.538 "superblock": true, 00:16:16.538 "num_base_bdevs": 4, 00:16:16.538 "num_base_bdevs_discovered": 3, 00:16:16.538 "num_base_bdevs_operational": 3, 00:16:16.538 "base_bdevs_list": [ 00:16:16.538 { 00:16:16.538 "name": null, 00:16:16.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.538 "is_configured": false, 00:16:16.538 "data_offset": 0, 00:16:16.538 "data_size": 63488 00:16:16.538 }, 00:16:16.538 { 00:16:16.538 "name": "pt2", 00:16:16.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.538 "is_configured": true, 00:16:16.538 "data_offset": 2048, 00:16:16.538 "data_size": 63488 00:16:16.538 }, 00:16:16.538 { 00:16:16.538 "name": "pt3", 00:16:16.538 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:16.538 "is_configured": true, 00:16:16.538 "data_offset": 2048, 00:16:16.538 "data_size": 63488 00:16:16.538 }, 00:16:16.538 { 00:16:16.538 "name": "pt4", 00:16:16.538 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:16.538 "is_configured": true, 00:16:16.538 
"data_offset": 2048, 00:16:16.538 "data_size": 63488 00:16:16.538 } 00:16:16.538 ] 00:16:16.538 }' 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.538 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.798 [2024-09-28 08:53:54.713038] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.798 [2024-09-28 08:53:54.713104] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.798 [2024-09-28 08:53:54.713169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.798 [2024-09-28 08:53:54.713260] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.798 [2024-09-28 08:53:54.713318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.798 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.059 [2024-09-28 08:53:54.812862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:17.059 [2024-09-28 08:53:54.812905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:17.059 [2024-09-28 08:53:54.812922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:16:17.059 [2024-09-28 08:53:54.812930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:17.059 [2024-09-28 08:53:54.815223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:17.059 [2024-09-28 08:53:54.815258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:17.059 [2024-09-28 08:53:54.815322] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:17.059 [2024-09-28 08:53:54.815378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:17.059 pt2
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:17.059 "name": "raid_bdev1",
00:16:17.059 "uuid": "78d6f8dc-59f7-4f93-bb37-b85f80295735",
00:16:17.059 "strip_size_kb": 64,
00:16:17.059 "state": "configuring",
00:16:17.059 "raid_level": "raid5f",
00:16:17.059 "superblock": true,
00:16:17.059 "num_base_bdevs": 4,
00:16:17.059 "num_base_bdevs_discovered": 1,
00:16:17.059 "num_base_bdevs_operational": 3,
00:16:17.059 "base_bdevs_list": [
00:16:17.059 {
00:16:17.059 "name": null,
00:16:17.059 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:17.059 "is_configured": false,
00:16:17.059 "data_offset": 2048,
00:16:17.059 "data_size": 63488
00:16:17.059 },
00:16:17.059 {
00:16:17.059 "name": "pt2",
00:16:17.059 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:17.059 "is_configured": true,
00:16:17.059 "data_offset": 2048,
00:16:17.059 "data_size": 63488
00:16:17.059 },
00:16:17.059 {
00:16:17.059 "name": null,
00:16:17.059 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:17.059 "is_configured": false,
00:16:17.059 "data_offset": 2048,
00:16:17.059 "data_size": 63488
00:16:17.059 },
00:16:17.059 {
00:16:17.059 "name": null,
00:16:17.059 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:17.059 "is_configured": false,
00:16:17.059 "data_offset": 2048,
00:16:17.059 "data_size": 63488
00:16:17.059 }
00:16:17.059 ]
00:16:17.059 }'
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:17.059 08:53:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.319 [2024-09-28 08:53:55.268103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:17.319 [2024-09-28 08:53:55.268187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:17.319 [2024-09-28 08:53:55.268219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:16:17.319 [2024-09-28 08:53:55.268245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:17.319 [2024-09-28 08:53:55.268633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:17.319 [2024-09-28 08:53:55.268704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:17.319 [2024-09-28 08:53:55.268797] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:16:17.319 [2024-09-28 08:53:55.268855] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:17.319 pt3
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.319 08:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.579 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:17.579 "name": "raid_bdev1",
00:16:17.579 "uuid": "78d6f8dc-59f7-4f93-bb37-b85f80295735",
00:16:17.579 "strip_size_kb": 64,
00:16:17.579 "state": "configuring",
00:16:17.579 "raid_level": "raid5f",
00:16:17.579 "superblock": true,
00:16:17.579 "num_base_bdevs": 4,
00:16:17.579 "num_base_bdevs_discovered": 2,
00:16:17.579 "num_base_bdevs_operational": 3,
00:16:17.579 "base_bdevs_list": [
00:16:17.579 {
00:16:17.579 "name": null,
00:16:17.579 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:17.579 "is_configured": false,
00:16:17.579 "data_offset": 2048,
00:16:17.579 "data_size": 63488
00:16:17.579 },
00:16:17.579 {
00:16:17.579 "name": "pt2",
00:16:17.579 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:17.579 "is_configured": true,
00:16:17.579 "data_offset": 2048,
00:16:17.579 "data_size": 63488
00:16:17.579 },
00:16:17.579 {
00:16:17.579 "name": "pt3",
00:16:17.579 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:17.579 "is_configured": true,
00:16:17.579 "data_offset": 2048,
00:16:17.579 "data_size": 63488
00:16:17.579 },
00:16:17.579 {
00:16:17.579 "name": null,
00:16:17.579 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:17.579 "is_configured": false,
00:16:17.579 "data_offset": 2048,
00:16:17.579 "data_size": 63488
00:16:17.579 }
00:16:17.579 ]
00:16:17.579 }'
00:16:17.579 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:17.579 08:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.838 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.839 [2024-09-28 08:53:55.679450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:17.839 [2024-09-28 08:53:55.679495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:17.839 [2024-09-28 08:53:55.679512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:16:17.839 [2024-09-28 08:53:55.679520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:17.839 [2024-09-28 08:53:55.679895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:17.839 [2024-09-28 08:53:55.679913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:17.839 [2024-09-28 08:53:55.679972] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:16:17.839 [2024-09-28 08:53:55.679988] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:16:17.839 [2024-09-28 08:53:55.680112] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:16:17.839 [2024-09-28 08:53:55.680120] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:17.839 [2024-09-28 08:53:55.680355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:16:17.839 [2024-09-28 08:53:55.686933] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:16:17.839 [2024-09-28 08:53:55.686958] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:16:17.839 [2024-09-28 08:53:55.687233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:17.839 pt4
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:17.839 "name": "raid_bdev1",
00:16:17.839 "uuid": "78d6f8dc-59f7-4f93-bb37-b85f80295735",
00:16:17.839 "strip_size_kb": 64,
00:16:17.839 "state": "online",
00:16:17.839 "raid_level": "raid5f",
00:16:17.839 "superblock": true,
00:16:17.839 "num_base_bdevs": 4,
00:16:17.839 "num_base_bdevs_discovered": 3,
00:16:17.839 "num_base_bdevs_operational": 3,
00:16:17.839 "base_bdevs_list": [
00:16:17.839 {
00:16:17.839 "name": null,
00:16:17.839 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:17.839 "is_configured": false,
00:16:17.839 "data_offset": 2048,
00:16:17.839 "data_size": 63488
00:16:17.839 },
00:16:17.839 {
00:16:17.839 "name": "pt2",
00:16:17.839 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:17.839 "is_configured": true,
00:16:17.839 "data_offset": 2048,
00:16:17.839 "data_size": 63488
00:16:17.839 },
00:16:17.839 {
00:16:17.839 "name": "pt3",
00:16:17.839 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:17.839 "is_configured": true,
00:16:17.839 "data_offset": 2048,
00:16:17.839 "data_size": 63488
00:16:17.839 },
00:16:17.839 {
00:16:17.839 "name": "pt4",
00:16:17.839 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:17.839 "is_configured": true,
00:16:17.839 "data_offset": 2048,
00:16:17.839 "data_size": 63488
00:16:17.839 }
00:16:17.839 ]
00:16:17.839 }'
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:17.839 08:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.407 [2024-09-28 08:53:56.131633] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:18.407 [2024-09-28 08:53:56.131713] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:18.407 [2024-09-28 08:53:56.131791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:18.407 [2024-09-28 08:53:56.131869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:18.407 [2024-09-28 08:53:56.131947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']'
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.407 [2024-09-28 08:53:56.203524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:18.407 [2024-09-28 08:53:56.203586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:18.407 [2024-09-28 08:53:56.203603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:16:18.407 [2024-09-28 08:53:56.203614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:18.407 [2024-09-28 08:53:56.206099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:18.407 [2024-09-28 08:53:56.206175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:18.407 [2024-09-28 08:53:56.206246] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:16:18.407 [2024-09-28 08:53:56.206302] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:18.407 [2024-09-28 08:53:56.206442] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:16:18.407 [2024-09-28 08:53:56.206457] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:18.407 [2024-09-28 08:53:56.206471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:16:18.407 [2024-09-28 08:53:56.206537] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:18.407 [2024-09-28 08:53:56.206632] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:18.407 pt1
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']'
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.407 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:18.407 "name": "raid_bdev1",
00:16:18.407 "uuid": "78d6f8dc-59f7-4f93-bb37-b85f80295735",
00:16:18.407 "strip_size_kb": 64,
00:16:18.407 "state": "configuring",
00:16:18.407 "raid_level": "raid5f",
00:16:18.407 "superblock": true,
00:16:18.407 "num_base_bdevs": 4,
00:16:18.407 "num_base_bdevs_discovered": 2,
00:16:18.407 "num_base_bdevs_operational": 3,
00:16:18.408 "base_bdevs_list": [
00:16:18.408 {
00:16:18.408 "name": null,
00:16:18.408 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:18.408 "is_configured": false,
00:16:18.408 "data_offset": 2048,
00:16:18.408 "data_size": 63488
00:16:18.408 },
00:16:18.408 {
00:16:18.408 "name": "pt2",
00:16:18.408 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:18.408 "is_configured": true,
00:16:18.408 "data_offset": 2048,
00:16:18.408 "data_size": 63488
00:16:18.408 },
00:16:18.408 {
00:16:18.408 "name": "pt3",
00:16:18.408 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:18.408 "is_configured": true,
00:16:18.408 "data_offset": 2048,
00:16:18.408 "data_size": 63488
00:16:18.408 },
00:16:18.408 {
00:16:18.408 "name": null,
00:16:18.408 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:18.408 "is_configured": false,
00:16:18.408 "data_offset": 2048,
00:16:18.408 "data_size": 63488
00:16:18.408 }
00:16:18.408 ]
00:16:18.408 }'
00:16:18.408 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:18.408 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.667 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:16:18.667 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:16:18.667 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.667 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.667 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.927 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
00:16:18.927 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:18.927 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.927 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.928 [2024-09-28 08:53:56.674786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:18.928 [2024-09-28 08:53:56.674871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:18.928 [2024-09-28 08:53:56.674909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:16:18.928 [2024-09-28 08:53:56.674936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:18.928 [2024-09-28 08:53:56.675364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:18.928 [2024-09-28 08:53:56.675437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:18.928 [2024-09-28 08:53:56.675531] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:16:18.928 [2024-09-28 08:53:56.675577] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:16:18.928 [2024-09-28 08:53:56.675744] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:16:18.928 [2024-09-28 08:53:56.675783] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:16:18.928 [2024-09-28 08:53:56.676061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:16:18.928 [2024-09-28 08:53:56.683106] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:16:18.928 [2024-09-28 08:53:56.683164] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:16:18.928 [2024-09-28 08:53:56.683479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:18.928 pt4
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:18.928 "name": "raid_bdev1",
00:16:18.928 "uuid": "78d6f8dc-59f7-4f93-bb37-b85f80295735",
00:16:18.928 "strip_size_kb": 64,
00:16:18.928 "state": "online",
00:16:18.928 "raid_level": "raid5f",
00:16:18.928 "superblock": true,
00:16:18.928 "num_base_bdevs": 4,
00:16:18.928 "num_base_bdevs_discovered": 3,
00:16:18.928 "num_base_bdevs_operational": 3,
00:16:18.928 "base_bdevs_list": [
00:16:18.928 {
00:16:18.928 "name": null,
00:16:18.928 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:18.928 "is_configured": false,
00:16:18.928 "data_offset": 2048,
00:16:18.928 "data_size": 63488
00:16:18.928 },
00:16:18.928 {
00:16:18.928 "name": "pt2",
00:16:18.928 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:18.928 "is_configured": true,
00:16:18.928 "data_offset": 2048,
00:16:18.928 "data_size": 63488
00:16:18.928 },
00:16:18.928 {
00:16:18.928 "name": "pt3",
00:16:18.928 "uuid": "00000000-0000-0000-0000-000000000003",
00:16:18.928 "is_configured": true,
00:16:18.928 "data_offset": 2048,
00:16:18.928 "data_size": 63488
00:16:18.928 },
00:16:18.928 {
00:16:18.928 "name": "pt4",
00:16:18.928 "uuid": "00000000-0000-0000-0000-000000000004",
00:16:18.928 "is_configured": true,
00:16:18.928 "data_offset": 2048,
00:16:18.928 "data_size": 63488
00:16:18.928 }
00:16:18.928 ]
00:16:18.928 }'
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:18.928 08:53:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.188 08:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:16:19.188 08:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.188 08:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.188 08:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:16:19.188 08:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.188 08:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:16:19.188 08:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:19.188 08:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.188 08:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:19.188 08:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:16:19.188 [2024-09-28 08:53:57.172260] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:19.188 08:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.448 08:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 78d6f8dc-59f7-4f93-bb37-b85f80295735 '!=' 78d6f8dc-59f7-4f93-bb37-b85f80295735 ']'
00:16:19.448 08:53:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84123
00:16:19.448 08:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 84123 ']'
00:16:19.448 08:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84123
00:16:19.448 08:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname
00:16:19.448 08:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:19.448 08:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84123
00:16:19.448 08:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
killing process with pid 84123
00:16:19.448 08:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:19.448 08:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84123'
00:16:19.448 08:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 84123
00:16:19.448 [2024-09-28 08:53:57.264906] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:19.448 [2024-09-28 08:53:57.264979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:19.448 [2024-09-28 08:53:57.265044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:19.448 [2024-09-28 08:53:57.265056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:16:19.448 08:53:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 84123
00:16:19.708 [2024-09-28 08:53:57.680331] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:21.090 08:53:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:16:21.090 ************************************
00:16:21.090 END TEST raid5f_superblock_test
00:16:21.090 ************************************
00:16:21.090
00:16:21.090 real 0m8.741s
00:16:21.090 user 0m13.412s
00:16:21.090 sys 0m1.735s
00:16:21.090 08:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:21.090 08:53:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.090 08:53:59 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']'
00:16:21.090 08:53:59 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true
00:16:21.090 08:53:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:16:21.090 08:53:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:21.090 08:53:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:16:21.350 ************************************
00:16:21.350 START TEST raid5f_rebuild_test
00:16:21.350 ************************************
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:21.350 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84609
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84609
00:16:21.351 08:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 84609 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
08:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
08:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100
08:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
08:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable
08:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:21.351 [2024-09-28 08:53:59.198254] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:16:21.351 I/O size of 3145728 is greater than zero copy threshold (65536).
00:16:21.351 Zero copy mechanism will not be used.
00:16:21.351 [2024-09-28 08:53:59.198467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84609 ]
00:16:21.611 [2024-09-28 08:53:59.361344] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:21.870 [2024-09-28 08:53:59.607022] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:16:21.870 [2024-09-28 08:53:59.846129] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:21.870 [2024-09-28 08:53:59.846167] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:22.133 08:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:22.133 08:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0
00:16:22.133 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:16:22.133 08:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:16:22.133 08:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.133 08:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:22.133 BaseBdev1_malloc
00:16:22.133 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:22.133 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:16:22.133 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.133 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:16:22.133 [2024-09-28 08:54:00.048778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on
BaseBdev1_malloc 00:16:22.133 [2024-09-28 08:54:00.048854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.133 [2024-09-28 08:54:00.048880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:22.133 [2024-09-28 08:54:00.048897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.133 [2024-09-28 08:54:00.051232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.133 [2024-09-28 08:54:00.051347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:22.133 BaseBdev1 00:16:22.133 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.133 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:22.133 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:22.133 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.133 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.393 BaseBdev2_malloc 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.393 [2024-09-28 08:54:00.135687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:22.393 [2024-09-28 08:54:00.135811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.393 [2024-09-28 08:54:00.135836] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:22.393 [2024-09-28 08:54:00.135851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.393 [2024-09-28 08:54:00.138181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.393 [2024-09-28 08:54:00.138217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:22.393 BaseBdev2 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.393 BaseBdev3_malloc 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.393 [2024-09-28 08:54:00.194353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:22.393 [2024-09-28 08:54:00.194404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.393 [2024-09-28 08:54:00.194426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:22.393 [2024-09-28 08:54:00.194438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.393 
[2024-09-28 08:54:00.196731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.393 [2024-09-28 08:54:00.196765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:22.393 BaseBdev3 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.393 BaseBdev4_malloc 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.393 [2024-09-28 08:54:00.250177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:22.393 [2024-09-28 08:54:00.250227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.393 [2024-09-28 08:54:00.250246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:22.393 [2024-09-28 08:54:00.250258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.393 [2024-09-28 08:54:00.252529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.393 [2024-09-28 08:54:00.252568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:22.393 BaseBdev4 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.393 spare_malloc 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:22.393 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.394 spare_delay 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.394 [2024-09-28 08:54:00.323035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:22.394 [2024-09-28 08:54:00.323094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.394 [2024-09-28 08:54:00.323113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:22.394 [2024-09-28 08:54:00.323124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.394 [2024-09-28 08:54:00.325411] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.394 [2024-09-28 08:54:00.325457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:22.394 spare 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.394 [2024-09-28 08:54:00.335080] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.394 [2024-09-28 08:54:00.337154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.394 [2024-09-28 08:54:00.337215] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:22.394 [2024-09-28 08:54:00.337262] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:22.394 [2024-09-28 08:54:00.337343] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:22.394 [2024-09-28 08:54:00.337354] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:22.394 [2024-09-28 08:54:00.337597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:22.394 [2024-09-28 08:54:00.344840] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:22.394 [2024-09-28 08:54:00.344861] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:22.394 [2024-09-28 08:54:00.345041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.394 08:54:00 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.394 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.654 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.654 "name": "raid_bdev1", 00:16:22.654 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:22.654 "strip_size_kb": 64, 00:16:22.654 "state": "online", 00:16:22.654 
"raid_level": "raid5f", 00:16:22.654 "superblock": false, 00:16:22.654 "num_base_bdevs": 4, 00:16:22.654 "num_base_bdevs_discovered": 4, 00:16:22.654 "num_base_bdevs_operational": 4, 00:16:22.654 "base_bdevs_list": [ 00:16:22.654 { 00:16:22.654 "name": "BaseBdev1", 00:16:22.654 "uuid": "308e66f8-ac23-59d8-9964-d3fddc93dbc4", 00:16:22.654 "is_configured": true, 00:16:22.654 "data_offset": 0, 00:16:22.654 "data_size": 65536 00:16:22.654 }, 00:16:22.654 { 00:16:22.654 "name": "BaseBdev2", 00:16:22.654 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:22.654 "is_configured": true, 00:16:22.654 "data_offset": 0, 00:16:22.654 "data_size": 65536 00:16:22.654 }, 00:16:22.654 { 00:16:22.654 "name": "BaseBdev3", 00:16:22.654 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:22.654 "is_configured": true, 00:16:22.654 "data_offset": 0, 00:16:22.654 "data_size": 65536 00:16:22.654 }, 00:16:22.654 { 00:16:22.654 "name": "BaseBdev4", 00:16:22.654 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:22.654 "is_configured": true, 00:16:22.654 "data_offset": 0, 00:16:22.654 "data_size": 65536 00:16:22.654 } 00:16:22.654 ] 00:16:22.654 }' 00:16:22.654 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.654 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.914 [2024-09-28 08:54:00.789438] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:22.914 08:54:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:23.174 [2024-09-28 08:54:01.044861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:23.174 /dev/nbd0 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:23.174 1+0 records in 00:16:23.174 1+0 records out 00:16:23.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031443 s, 13.0 MB/s 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:23.174 08:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:24.113 512+0 records in 00:16:24.113 512+0 records out 00:16:24.113 100663296 bytes (101 MB, 96 MiB) copied, 0.829884 s, 121 MB/s 00:16:24.113 08:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:24.113 08:54:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:24.113 08:54:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:24.113 08:54:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:24.113 08:54:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:24.113 08:54:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.113 08:54:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:24.372 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:24.372 
08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:24.372 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:24.372 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.372 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.372 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:24.372 [2024-09-28 08:54:02.163983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.372 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:24.372 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:24.372 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:24.372 08:54:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.373 [2024-09-28 08:54:02.173683] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.373 "name": "raid_bdev1", 00:16:24.373 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:24.373 "strip_size_kb": 64, 00:16:24.373 "state": "online", 00:16:24.373 "raid_level": "raid5f", 00:16:24.373 "superblock": false, 00:16:24.373 "num_base_bdevs": 4, 00:16:24.373 "num_base_bdevs_discovered": 3, 00:16:24.373 "num_base_bdevs_operational": 3, 00:16:24.373 "base_bdevs_list": [ 00:16:24.373 { 00:16:24.373 "name": null, 00:16:24.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.373 "is_configured": false, 00:16:24.373 "data_offset": 0, 00:16:24.373 "data_size": 65536 00:16:24.373 }, 00:16:24.373 { 00:16:24.373 "name": "BaseBdev2", 00:16:24.373 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:24.373 "is_configured": true, 00:16:24.373 "data_offset": 0, 00:16:24.373 "data_size": 65536 00:16:24.373 }, 00:16:24.373 { 00:16:24.373 "name": "BaseBdev3", 00:16:24.373 "uuid": 
"6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:24.373 "is_configured": true, 00:16:24.373 "data_offset": 0, 00:16:24.373 "data_size": 65536 00:16:24.373 }, 00:16:24.373 { 00:16:24.373 "name": "BaseBdev4", 00:16:24.373 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:24.373 "is_configured": true, 00:16:24.373 "data_offset": 0, 00:16:24.373 "data_size": 65536 00:16:24.373 } 00:16:24.373 ] 00:16:24.373 }' 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.373 08:54:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.633 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:24.633 08:54:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.633 08:54:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.633 [2024-09-28 08:54:02.612867] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:24.892 [2024-09-28 08:54:02.627945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:24.892 08:54:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.892 08:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:24.892 [2024-09-28 08:54:02.637069] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.832 08:54:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.832 "name": "raid_bdev1", 00:16:25.832 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:25.832 "strip_size_kb": 64, 00:16:25.832 "state": "online", 00:16:25.832 "raid_level": "raid5f", 00:16:25.832 "superblock": false, 00:16:25.832 "num_base_bdevs": 4, 00:16:25.832 "num_base_bdevs_discovered": 4, 00:16:25.832 "num_base_bdevs_operational": 4, 00:16:25.832 "process": { 00:16:25.832 "type": "rebuild", 00:16:25.832 "target": "spare", 00:16:25.832 "progress": { 00:16:25.832 "blocks": 19200, 00:16:25.832 "percent": 9 00:16:25.832 } 00:16:25.832 }, 00:16:25.832 "base_bdevs_list": [ 00:16:25.832 { 00:16:25.832 "name": "spare", 00:16:25.832 "uuid": "3aa2f3d0-42a5-530d-8196-40e125f54ed2", 00:16:25.832 "is_configured": true, 00:16:25.832 "data_offset": 0, 00:16:25.832 "data_size": 65536 00:16:25.832 }, 00:16:25.832 { 00:16:25.832 "name": "BaseBdev2", 00:16:25.832 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:25.832 "is_configured": true, 00:16:25.832 "data_offset": 0, 00:16:25.832 "data_size": 65536 00:16:25.832 }, 00:16:25.832 { 00:16:25.832 "name": "BaseBdev3", 00:16:25.832 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:25.832 "is_configured": true, 00:16:25.832 "data_offset": 0, 00:16:25.832 "data_size": 65536 00:16:25.832 }, 
00:16:25.832 { 00:16:25.832 "name": "BaseBdev4", 00:16:25.832 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:25.832 "is_configured": true, 00:16:25.832 "data_offset": 0, 00:16:25.832 "data_size": 65536 00:16:25.832 } 00:16:25.832 ] 00:16:25.832 }' 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.832 08:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.832 [2024-09-28 08:54:03.787938] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.092 [2024-09-28 08:54:03.843836] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:26.092 [2024-09-28 08:54:03.843896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.092 [2024-09-28 08:54:03.843914] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.092 [2024-09-28 08:54:03.843925] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:26.092 08:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.092 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.093 "name": "raid_bdev1", 00:16:26.093 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:26.093 "strip_size_kb": 64, 00:16:26.093 "state": "online", 00:16:26.093 "raid_level": "raid5f", 00:16:26.093 "superblock": false, 00:16:26.093 "num_base_bdevs": 4, 00:16:26.093 "num_base_bdevs_discovered": 3, 00:16:26.093 "num_base_bdevs_operational": 3, 00:16:26.093 "base_bdevs_list": [ 00:16:26.093 { 00:16:26.093 "name": null, 00:16:26.093 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:26.093 "is_configured": false, 00:16:26.093 "data_offset": 0, 00:16:26.093 "data_size": 65536 00:16:26.093 }, 00:16:26.093 { 00:16:26.093 "name": "BaseBdev2", 00:16:26.093 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:26.093 "is_configured": true, 00:16:26.093 "data_offset": 0, 00:16:26.093 "data_size": 65536 00:16:26.093 }, 00:16:26.093 { 00:16:26.093 "name": "BaseBdev3", 00:16:26.093 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:26.093 "is_configured": true, 00:16:26.093 "data_offset": 0, 00:16:26.093 "data_size": 65536 00:16:26.093 }, 00:16:26.093 { 00:16:26.093 "name": "BaseBdev4", 00:16:26.093 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:26.093 "is_configured": true, 00:16:26.093 "data_offset": 0, 00:16:26.093 "data_size": 65536 00:16:26.093 } 00:16:26.093 ] 00:16:26.093 }' 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.093 08:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.353 08:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:26.353 08:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.353 08:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:26.353 08:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:26.353 08:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.353 08:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.353 08:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.353 08:54:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.353 08:54:04 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.353 08:54:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.353 08:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.353 "name": "raid_bdev1", 00:16:26.353 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:26.353 "strip_size_kb": 64, 00:16:26.353 "state": "online", 00:16:26.353 "raid_level": "raid5f", 00:16:26.353 "superblock": false, 00:16:26.353 "num_base_bdevs": 4, 00:16:26.353 "num_base_bdevs_discovered": 3, 00:16:26.353 "num_base_bdevs_operational": 3, 00:16:26.353 "base_bdevs_list": [ 00:16:26.353 { 00:16:26.353 "name": null, 00:16:26.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.353 "is_configured": false, 00:16:26.353 "data_offset": 0, 00:16:26.353 "data_size": 65536 00:16:26.353 }, 00:16:26.353 { 00:16:26.353 "name": "BaseBdev2", 00:16:26.353 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:26.353 "is_configured": true, 00:16:26.353 "data_offset": 0, 00:16:26.353 "data_size": 65536 00:16:26.353 }, 00:16:26.353 { 00:16:26.353 "name": "BaseBdev3", 00:16:26.353 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:26.353 "is_configured": true, 00:16:26.353 "data_offset": 0, 00:16:26.353 "data_size": 65536 00:16:26.353 }, 00:16:26.353 { 00:16:26.353 "name": "BaseBdev4", 00:16:26.353 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:26.353 "is_configured": true, 00:16:26.353 "data_offset": 0, 00:16:26.353 "data_size": 65536 00:16:26.353 } 00:16:26.353 ] 00:16:26.353 }' 00:16:26.613 08:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.613 08:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:26.613 08:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.613 08:54:04 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:26.613 08:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:26.613 08:54:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.613 08:54:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.613 [2024-09-28 08:54:04.449155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:26.613 [2024-09-28 08:54:04.462466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:26.613 08:54:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.613 08:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:26.613 [2024-09-28 08:54:04.470990] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:27.553 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.553 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.553 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.553 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.553 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.553 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.553 08:54:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.553 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.553 08:54:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.553 08:54:05 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.553 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.553 "name": "raid_bdev1", 00:16:27.553 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:27.553 "strip_size_kb": 64, 00:16:27.553 "state": "online", 00:16:27.553 "raid_level": "raid5f", 00:16:27.553 "superblock": false, 00:16:27.553 "num_base_bdevs": 4, 00:16:27.553 "num_base_bdevs_discovered": 4, 00:16:27.553 "num_base_bdevs_operational": 4, 00:16:27.553 "process": { 00:16:27.553 "type": "rebuild", 00:16:27.553 "target": "spare", 00:16:27.553 "progress": { 00:16:27.553 "blocks": 19200, 00:16:27.553 "percent": 9 00:16:27.553 } 00:16:27.553 }, 00:16:27.553 "base_bdevs_list": [ 00:16:27.553 { 00:16:27.553 "name": "spare", 00:16:27.553 "uuid": "3aa2f3d0-42a5-530d-8196-40e125f54ed2", 00:16:27.553 "is_configured": true, 00:16:27.553 "data_offset": 0, 00:16:27.553 "data_size": 65536 00:16:27.553 }, 00:16:27.553 { 00:16:27.553 "name": "BaseBdev2", 00:16:27.553 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:27.553 "is_configured": true, 00:16:27.553 "data_offset": 0, 00:16:27.553 "data_size": 65536 00:16:27.553 }, 00:16:27.553 { 00:16:27.553 "name": "BaseBdev3", 00:16:27.553 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:27.553 "is_configured": true, 00:16:27.553 "data_offset": 0, 00:16:27.553 "data_size": 65536 00:16:27.553 }, 00:16:27.553 { 00:16:27.553 "name": "BaseBdev4", 00:16:27.553 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:27.553 "is_configured": true, 00:16:27.553 "data_offset": 0, 00:16:27.553 "data_size": 65536 00:16:27.553 } 00:16:27.553 ] 00:16:27.553 }' 00:16:27.553 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=625 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.813 "name": "raid_bdev1", 00:16:27.813 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 
00:16:27.813 "strip_size_kb": 64, 00:16:27.813 "state": "online", 00:16:27.813 "raid_level": "raid5f", 00:16:27.813 "superblock": false, 00:16:27.813 "num_base_bdevs": 4, 00:16:27.813 "num_base_bdevs_discovered": 4, 00:16:27.813 "num_base_bdevs_operational": 4, 00:16:27.813 "process": { 00:16:27.813 "type": "rebuild", 00:16:27.813 "target": "spare", 00:16:27.813 "progress": { 00:16:27.813 "blocks": 21120, 00:16:27.813 "percent": 10 00:16:27.813 } 00:16:27.813 }, 00:16:27.813 "base_bdevs_list": [ 00:16:27.813 { 00:16:27.813 "name": "spare", 00:16:27.813 "uuid": "3aa2f3d0-42a5-530d-8196-40e125f54ed2", 00:16:27.813 "is_configured": true, 00:16:27.813 "data_offset": 0, 00:16:27.813 "data_size": 65536 00:16:27.813 }, 00:16:27.813 { 00:16:27.813 "name": "BaseBdev2", 00:16:27.813 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:27.813 "is_configured": true, 00:16:27.813 "data_offset": 0, 00:16:27.813 "data_size": 65536 00:16:27.813 }, 00:16:27.813 { 00:16:27.813 "name": "BaseBdev3", 00:16:27.813 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:27.813 "is_configured": true, 00:16:27.813 "data_offset": 0, 00:16:27.813 "data_size": 65536 00:16:27.813 }, 00:16:27.813 { 00:16:27.813 "name": "BaseBdev4", 00:16:27.813 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:27.813 "is_configured": true, 00:16:27.813 "data_offset": 0, 00:16:27.813 "data_size": 65536 00:16:27.813 } 00:16:27.813 ] 00:16:27.813 }' 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.813 08:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:29.194 08:54:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.194 08:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.194 08:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.194 08:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.194 08:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.194 08:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.194 08:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.194 08:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.194 08:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.194 08:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.194 08:54:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.194 08:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.194 "name": "raid_bdev1", 00:16:29.194 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:29.194 "strip_size_kb": 64, 00:16:29.194 "state": "online", 00:16:29.194 "raid_level": "raid5f", 00:16:29.194 "superblock": false, 00:16:29.194 "num_base_bdevs": 4, 00:16:29.194 "num_base_bdevs_discovered": 4, 00:16:29.194 "num_base_bdevs_operational": 4, 00:16:29.194 "process": { 00:16:29.194 "type": "rebuild", 00:16:29.194 "target": "spare", 00:16:29.194 "progress": { 00:16:29.194 "blocks": 42240, 00:16:29.194 "percent": 21 00:16:29.194 } 00:16:29.194 }, 00:16:29.194 "base_bdevs_list": [ 00:16:29.194 { 00:16:29.194 "name": "spare", 00:16:29.194 "uuid": "3aa2f3d0-42a5-530d-8196-40e125f54ed2", 
00:16:29.194 "is_configured": true, 00:16:29.194 "data_offset": 0, 00:16:29.194 "data_size": 65536 00:16:29.194 }, 00:16:29.194 { 00:16:29.194 "name": "BaseBdev2", 00:16:29.194 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:29.194 "is_configured": true, 00:16:29.194 "data_offset": 0, 00:16:29.194 "data_size": 65536 00:16:29.194 }, 00:16:29.194 { 00:16:29.194 "name": "BaseBdev3", 00:16:29.194 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:29.194 "is_configured": true, 00:16:29.194 "data_offset": 0, 00:16:29.194 "data_size": 65536 00:16:29.194 }, 00:16:29.194 { 00:16:29.194 "name": "BaseBdev4", 00:16:29.194 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:29.194 "is_configured": true, 00:16:29.194 "data_offset": 0, 00:16:29.194 "data_size": 65536 00:16:29.194 } 00:16:29.194 ] 00:16:29.194 }' 00:16:29.194 08:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.194 08:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.194 08:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.194 08:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.194 08:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:30.135 08:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.135 08:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.135 08:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.135 08:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.135 08:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.135 08:54:07 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.135 08:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.135 08:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.135 08:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.135 08:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.135 08:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.135 08:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.135 "name": "raid_bdev1", 00:16:30.135 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:30.135 "strip_size_kb": 64, 00:16:30.135 "state": "online", 00:16:30.135 "raid_level": "raid5f", 00:16:30.135 "superblock": false, 00:16:30.135 "num_base_bdevs": 4, 00:16:30.135 "num_base_bdevs_discovered": 4, 00:16:30.135 "num_base_bdevs_operational": 4, 00:16:30.135 "process": { 00:16:30.135 "type": "rebuild", 00:16:30.135 "target": "spare", 00:16:30.135 "progress": { 00:16:30.135 "blocks": 65280, 00:16:30.135 "percent": 33 00:16:30.135 } 00:16:30.135 }, 00:16:30.135 "base_bdevs_list": [ 00:16:30.135 { 00:16:30.135 "name": "spare", 00:16:30.135 "uuid": "3aa2f3d0-42a5-530d-8196-40e125f54ed2", 00:16:30.135 "is_configured": true, 00:16:30.135 "data_offset": 0, 00:16:30.135 "data_size": 65536 00:16:30.135 }, 00:16:30.135 { 00:16:30.135 "name": "BaseBdev2", 00:16:30.135 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:30.135 "is_configured": true, 00:16:30.135 "data_offset": 0, 00:16:30.135 "data_size": 65536 00:16:30.135 }, 00:16:30.135 { 00:16:30.135 "name": "BaseBdev3", 00:16:30.135 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:30.135 "is_configured": true, 00:16:30.135 "data_offset": 0, 00:16:30.135 "data_size": 65536 00:16:30.135 }, 00:16:30.135 { 00:16:30.135 "name": 
"BaseBdev4", 00:16:30.135 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:30.135 "is_configured": true, 00:16:30.135 "data_offset": 0, 00:16:30.135 "data_size": 65536 00:16:30.135 } 00:16:30.135 ] 00:16:30.135 }' 00:16:30.135 08:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.135 08:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.135 08:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.135 08:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.135 08:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:31.074 08:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.074 08:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.074 08:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.074 08:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.074 08:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.074 08:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.074 08:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.074 08:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.074 08:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.074 08:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.074 08:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.333 08:54:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.333 "name": "raid_bdev1", 00:16:31.333 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:31.333 "strip_size_kb": 64, 00:16:31.333 "state": "online", 00:16:31.333 "raid_level": "raid5f", 00:16:31.333 "superblock": false, 00:16:31.333 "num_base_bdevs": 4, 00:16:31.333 "num_base_bdevs_discovered": 4, 00:16:31.333 "num_base_bdevs_operational": 4, 00:16:31.333 "process": { 00:16:31.333 "type": "rebuild", 00:16:31.333 "target": "spare", 00:16:31.333 "progress": { 00:16:31.333 "blocks": 86400, 00:16:31.333 "percent": 43 00:16:31.333 } 00:16:31.333 }, 00:16:31.333 "base_bdevs_list": [ 00:16:31.333 { 00:16:31.333 "name": "spare", 00:16:31.333 "uuid": "3aa2f3d0-42a5-530d-8196-40e125f54ed2", 00:16:31.333 "is_configured": true, 00:16:31.333 "data_offset": 0, 00:16:31.333 "data_size": 65536 00:16:31.333 }, 00:16:31.333 { 00:16:31.333 "name": "BaseBdev2", 00:16:31.333 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:31.333 "is_configured": true, 00:16:31.333 "data_offset": 0, 00:16:31.333 "data_size": 65536 00:16:31.333 }, 00:16:31.333 { 00:16:31.333 "name": "BaseBdev3", 00:16:31.333 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:31.333 "is_configured": true, 00:16:31.333 "data_offset": 0, 00:16:31.333 "data_size": 65536 00:16:31.333 }, 00:16:31.333 { 00:16:31.333 "name": "BaseBdev4", 00:16:31.333 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:31.333 "is_configured": true, 00:16:31.333 "data_offset": 0, 00:16:31.333 "data_size": 65536 00:16:31.333 } 00:16:31.333 ] 00:16:31.333 }' 00:16:31.333 08:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.333 08:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.333 08:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.333 08:54:09 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.333 08:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.272 08:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.272 08:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.272 08:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.272 08:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.272 08:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.272 08:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.272 08:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.272 08:54:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.272 08:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.272 08:54:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.272 08:54:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.272 08:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.272 "name": "raid_bdev1", 00:16:32.272 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:32.272 "strip_size_kb": 64, 00:16:32.272 "state": "online", 00:16:32.272 "raid_level": "raid5f", 00:16:32.272 "superblock": false, 00:16:32.272 "num_base_bdevs": 4, 00:16:32.272 "num_base_bdevs_discovered": 4, 00:16:32.272 "num_base_bdevs_operational": 4, 00:16:32.272 "process": { 00:16:32.272 "type": "rebuild", 00:16:32.272 "target": "spare", 00:16:32.272 "progress": { 00:16:32.272 "blocks": 107520, 00:16:32.272 "percent": 54 00:16:32.272 } 
00:16:32.272 }, 00:16:32.272 "base_bdevs_list": [ 00:16:32.272 { 00:16:32.272 "name": "spare", 00:16:32.272 "uuid": "3aa2f3d0-42a5-530d-8196-40e125f54ed2", 00:16:32.272 "is_configured": true, 00:16:32.272 "data_offset": 0, 00:16:32.272 "data_size": 65536 00:16:32.272 }, 00:16:32.272 { 00:16:32.272 "name": "BaseBdev2", 00:16:32.272 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:32.272 "is_configured": true, 00:16:32.272 "data_offset": 0, 00:16:32.272 "data_size": 65536 00:16:32.273 }, 00:16:32.273 { 00:16:32.273 "name": "BaseBdev3", 00:16:32.273 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:32.273 "is_configured": true, 00:16:32.273 "data_offset": 0, 00:16:32.273 "data_size": 65536 00:16:32.273 }, 00:16:32.273 { 00:16:32.273 "name": "BaseBdev4", 00:16:32.273 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:32.273 "is_configured": true, 00:16:32.273 "data_offset": 0, 00:16:32.273 "data_size": 65536 00:16:32.273 } 00:16:32.273 ] 00:16:32.273 }' 00:16:32.273 08:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.273 08:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.273 08:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.532 08:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.532 08:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.484 08:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.484 08:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.484 08:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.484 08:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.484 
08:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.484 08:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.484 08:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.484 08:54:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.484 08:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.484 08:54:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.484 08:54:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.484 08:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.484 "name": "raid_bdev1", 00:16:33.484 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:33.484 "strip_size_kb": 64, 00:16:33.484 "state": "online", 00:16:33.484 "raid_level": "raid5f", 00:16:33.484 "superblock": false, 00:16:33.484 "num_base_bdevs": 4, 00:16:33.484 "num_base_bdevs_discovered": 4, 00:16:33.484 "num_base_bdevs_operational": 4, 00:16:33.484 "process": { 00:16:33.484 "type": "rebuild", 00:16:33.484 "target": "spare", 00:16:33.484 "progress": { 00:16:33.484 "blocks": 130560, 00:16:33.484 "percent": 66 00:16:33.484 } 00:16:33.484 }, 00:16:33.484 "base_bdevs_list": [ 00:16:33.484 { 00:16:33.484 "name": "spare", 00:16:33.484 "uuid": "3aa2f3d0-42a5-530d-8196-40e125f54ed2", 00:16:33.484 "is_configured": true, 00:16:33.484 "data_offset": 0, 00:16:33.484 "data_size": 65536 00:16:33.484 }, 00:16:33.484 { 00:16:33.484 "name": "BaseBdev2", 00:16:33.484 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:33.484 "is_configured": true, 00:16:33.484 "data_offset": 0, 00:16:33.484 "data_size": 65536 00:16:33.484 }, 00:16:33.484 { 00:16:33.484 "name": "BaseBdev3", 00:16:33.484 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 
00:16:33.484 "is_configured": true, 00:16:33.484 "data_offset": 0, 00:16:33.484 "data_size": 65536 00:16:33.484 }, 00:16:33.484 { 00:16:33.484 "name": "BaseBdev4", 00:16:33.484 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:33.484 "is_configured": true, 00:16:33.484 "data_offset": 0, 00:16:33.484 "data_size": 65536 00:16:33.484 } 00:16:33.484 ] 00:16:33.484 }' 00:16:33.484 08:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.484 08:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.484 08:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.484 08:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.484 08:54:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:34.902 08:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:34.902 08:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.902 08:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.902 08:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.903 08:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.903 08:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.903 08:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.903 08:54:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.903 08:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.903 08:54:12 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:34.903 08:54:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.903 08:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.903 "name": "raid_bdev1", 00:16:34.903 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:34.903 "strip_size_kb": 64, 00:16:34.903 "state": "online", 00:16:34.903 "raid_level": "raid5f", 00:16:34.903 "superblock": false, 00:16:34.903 "num_base_bdevs": 4, 00:16:34.903 "num_base_bdevs_discovered": 4, 00:16:34.903 "num_base_bdevs_operational": 4, 00:16:34.903 "process": { 00:16:34.903 "type": "rebuild", 00:16:34.903 "target": "spare", 00:16:34.903 "progress": { 00:16:34.903 "blocks": 151680, 00:16:34.903 "percent": 77 00:16:34.903 } 00:16:34.903 }, 00:16:34.903 "base_bdevs_list": [ 00:16:34.903 { 00:16:34.903 "name": "spare", 00:16:34.903 "uuid": "3aa2f3d0-42a5-530d-8196-40e125f54ed2", 00:16:34.903 "is_configured": true, 00:16:34.903 "data_offset": 0, 00:16:34.903 "data_size": 65536 00:16:34.903 }, 00:16:34.903 { 00:16:34.903 "name": "BaseBdev2", 00:16:34.903 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:34.903 "is_configured": true, 00:16:34.903 "data_offset": 0, 00:16:34.903 "data_size": 65536 00:16:34.903 }, 00:16:34.903 { 00:16:34.903 "name": "BaseBdev3", 00:16:34.903 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:34.903 "is_configured": true, 00:16:34.903 "data_offset": 0, 00:16:34.903 "data_size": 65536 00:16:34.903 }, 00:16:34.903 { 00:16:34.903 "name": "BaseBdev4", 00:16:34.903 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:34.903 "is_configured": true, 00:16:34.903 "data_offset": 0, 00:16:34.903 "data_size": 65536 00:16:34.903 } 00:16:34.903 ] 00:16:34.903 }' 00:16:34.903 08:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.903 08:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:16:34.903 08:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.903 08:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.903 08:54:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.841 08:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.841 08:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.841 08:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.841 08:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.841 08:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.841 08:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.841 08:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.841 08:54:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.841 08:54:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.841 08:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.841 08:54:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.841 08:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.841 "name": "raid_bdev1", 00:16:35.841 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:35.841 "strip_size_kb": 64, 00:16:35.841 "state": "online", 00:16:35.841 "raid_level": "raid5f", 00:16:35.841 "superblock": false, 00:16:35.841 "num_base_bdevs": 4, 00:16:35.841 "num_base_bdevs_discovered": 4, 00:16:35.841 "num_base_bdevs_operational": 4, 00:16:35.841 
"process": { 00:16:35.841 "type": "rebuild", 00:16:35.841 "target": "spare", 00:16:35.841 "progress": { 00:16:35.841 "blocks": 174720, 00:16:35.841 "percent": 88 00:16:35.841 } 00:16:35.841 }, 00:16:35.841 "base_bdevs_list": [ 00:16:35.841 { 00:16:35.841 "name": "spare", 00:16:35.841 "uuid": "3aa2f3d0-42a5-530d-8196-40e125f54ed2", 00:16:35.841 "is_configured": true, 00:16:35.841 "data_offset": 0, 00:16:35.841 "data_size": 65536 00:16:35.841 }, 00:16:35.841 { 00:16:35.841 "name": "BaseBdev2", 00:16:35.841 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:35.841 "is_configured": true, 00:16:35.841 "data_offset": 0, 00:16:35.841 "data_size": 65536 00:16:35.841 }, 00:16:35.841 { 00:16:35.841 "name": "BaseBdev3", 00:16:35.841 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:35.841 "is_configured": true, 00:16:35.841 "data_offset": 0, 00:16:35.841 "data_size": 65536 00:16:35.841 }, 00:16:35.841 { 00:16:35.841 "name": "BaseBdev4", 00:16:35.841 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:35.841 "is_configured": true, 00:16:35.841 "data_offset": 0, 00:16:35.841 "data_size": 65536 00:16:35.841 } 00:16:35.841 ] 00:16:35.841 }' 00:16:35.841 08:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.841 08:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.841 08:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.842 08:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.842 08:54:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:36.779 08:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.779 08:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.779 08:54:14 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.779 08:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.779 08:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.779 08:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.779 08:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.779 08:54:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.779 08:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.779 08:54:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.039 08:54:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.039 08:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.039 "name": "raid_bdev1", 00:16:37.039 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:37.039 "strip_size_kb": 64, 00:16:37.039 "state": "online", 00:16:37.039 "raid_level": "raid5f", 00:16:37.039 "superblock": false, 00:16:37.039 "num_base_bdevs": 4, 00:16:37.039 "num_base_bdevs_discovered": 4, 00:16:37.039 "num_base_bdevs_operational": 4, 00:16:37.039 "process": { 00:16:37.039 "type": "rebuild", 00:16:37.039 "target": "spare", 00:16:37.039 "progress": { 00:16:37.039 "blocks": 195840, 00:16:37.039 "percent": 99 00:16:37.039 } 00:16:37.039 }, 00:16:37.039 "base_bdevs_list": [ 00:16:37.039 { 00:16:37.039 "name": "spare", 00:16:37.039 "uuid": "3aa2f3d0-42a5-530d-8196-40e125f54ed2", 00:16:37.039 "is_configured": true, 00:16:37.039 "data_offset": 0, 00:16:37.039 "data_size": 65536 00:16:37.039 }, 00:16:37.039 { 00:16:37.039 "name": "BaseBdev2", 00:16:37.039 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:37.039 "is_configured": true, 00:16:37.039 
"data_offset": 0, 00:16:37.039 "data_size": 65536 00:16:37.039 }, 00:16:37.039 { 00:16:37.039 "name": "BaseBdev3", 00:16:37.039 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:37.039 "is_configured": true, 00:16:37.039 "data_offset": 0, 00:16:37.039 "data_size": 65536 00:16:37.039 }, 00:16:37.039 { 00:16:37.039 "name": "BaseBdev4", 00:16:37.039 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:37.039 "is_configured": true, 00:16:37.039 "data_offset": 0, 00:16:37.039 "data_size": 65536 00:16:37.039 } 00:16:37.039 ] 00:16:37.039 }' 00:16:37.039 08:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.039 [2024-09-28 08:54:14.821585] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:37.039 [2024-09-28 08:54:14.821719] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:37.039 [2024-09-28 08:54:14.821792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.039 08:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.039 08:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.039 08:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.039 08:54:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.978 08:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:37.978 08:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.978 08:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.978 08:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.979 08:54:15 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.979 08:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.979 08:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.979 08:54:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.979 08:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.979 08:54:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.979 08:54:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.979 08:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.979 "name": "raid_bdev1", 00:16:37.979 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:37.979 "strip_size_kb": 64, 00:16:37.979 "state": "online", 00:16:37.979 "raid_level": "raid5f", 00:16:37.979 "superblock": false, 00:16:37.979 "num_base_bdevs": 4, 00:16:37.979 "num_base_bdevs_discovered": 4, 00:16:37.979 "num_base_bdevs_operational": 4, 00:16:37.979 "base_bdevs_list": [ 00:16:37.979 { 00:16:37.979 "name": "spare", 00:16:37.979 "uuid": "3aa2f3d0-42a5-530d-8196-40e125f54ed2", 00:16:37.979 "is_configured": true, 00:16:37.979 "data_offset": 0, 00:16:37.979 "data_size": 65536 00:16:37.979 }, 00:16:37.979 { 00:16:37.979 "name": "BaseBdev2", 00:16:37.979 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:37.979 "is_configured": true, 00:16:37.979 "data_offset": 0, 00:16:37.979 "data_size": 65536 00:16:37.979 }, 00:16:37.979 { 00:16:37.979 "name": "BaseBdev3", 00:16:37.979 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:37.979 "is_configured": true, 00:16:37.979 "data_offset": 0, 00:16:37.979 "data_size": 65536 00:16:37.979 }, 00:16:37.979 { 00:16:37.979 "name": "BaseBdev4", 00:16:37.979 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:37.979 "is_configured": 
true, 00:16:37.979 "data_offset": 0, 00:16:37.979 "data_size": 65536 00:16:37.979 } 00:16:37.979 ] 00:16:37.979 }' 00:16:37.979 08:54:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.243 "name": "raid_bdev1", 00:16:38.243 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:38.243 "strip_size_kb": 64, 00:16:38.243 "state": 
"online", 00:16:38.243 "raid_level": "raid5f", 00:16:38.243 "superblock": false, 00:16:38.243 "num_base_bdevs": 4, 00:16:38.243 "num_base_bdevs_discovered": 4, 00:16:38.243 "num_base_bdevs_operational": 4, 00:16:38.243 "base_bdevs_list": [ 00:16:38.243 { 00:16:38.243 "name": "spare", 00:16:38.243 "uuid": "3aa2f3d0-42a5-530d-8196-40e125f54ed2", 00:16:38.243 "is_configured": true, 00:16:38.243 "data_offset": 0, 00:16:38.243 "data_size": 65536 00:16:38.243 }, 00:16:38.243 { 00:16:38.243 "name": "BaseBdev2", 00:16:38.243 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:38.243 "is_configured": true, 00:16:38.243 "data_offset": 0, 00:16:38.243 "data_size": 65536 00:16:38.243 }, 00:16:38.243 { 00:16:38.243 "name": "BaseBdev3", 00:16:38.243 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:38.243 "is_configured": true, 00:16:38.243 "data_offset": 0, 00:16:38.243 "data_size": 65536 00:16:38.243 }, 00:16:38.243 { 00:16:38.243 "name": "BaseBdev4", 00:16:38.243 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:38.243 "is_configured": true, 00:16:38.243 "data_offset": 0, 00:16:38.243 "data_size": 65536 00:16:38.243 } 00:16:38.243 ] 00:16:38.243 }' 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:38.243 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:38.244 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.244 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.244 08:54:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.244 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.244 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.244 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.244 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.244 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.244 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.244 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.244 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.244 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.244 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.244 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.244 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.244 "name": "raid_bdev1", 00:16:38.244 "uuid": "f8251c18-884d-4838-94a4-199fbdbc1490", 00:16:38.244 "strip_size_kb": 64, 00:16:38.244 "state": "online", 00:16:38.244 "raid_level": "raid5f", 00:16:38.244 "superblock": false, 00:16:38.244 "num_base_bdevs": 4, 00:16:38.244 "num_base_bdevs_discovered": 4, 00:16:38.244 "num_base_bdevs_operational": 4, 00:16:38.244 "base_bdevs_list": [ 00:16:38.244 { 00:16:38.244 "name": "spare", 00:16:38.244 "uuid": "3aa2f3d0-42a5-530d-8196-40e125f54ed2", 00:16:38.244 "is_configured": true, 00:16:38.244 "data_offset": 0, 00:16:38.244 "data_size": 65536 00:16:38.244 }, 00:16:38.244 { 00:16:38.244 
"name": "BaseBdev2", 00:16:38.244 "uuid": "44895726-154b-5f95-b0ec-a3c631837dd6", 00:16:38.244 "is_configured": true, 00:16:38.244 "data_offset": 0, 00:16:38.244 "data_size": 65536 00:16:38.244 }, 00:16:38.244 { 00:16:38.244 "name": "BaseBdev3", 00:16:38.244 "uuid": "6d1c2cb3-be42-5153-8fa9-24560bcf028b", 00:16:38.244 "is_configured": true, 00:16:38.244 "data_offset": 0, 00:16:38.244 "data_size": 65536 00:16:38.244 }, 00:16:38.244 { 00:16:38.244 "name": "BaseBdev4", 00:16:38.244 "uuid": "51237b75-89cc-5438-9ddc-4255b4798c6f", 00:16:38.244 "is_configured": true, 00:16:38.244 "data_offset": 0, 00:16:38.244 "data_size": 65536 00:16:38.244 } 00:16:38.244 ] 00:16:38.244 }' 00:16:38.244 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.244 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.812 [2024-09-28 08:54:16.605975] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.812 [2024-09-28 08:54:16.606047] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.812 [2024-09-28 08:54:16.606152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.812 [2024-09-28 08:54:16.606278] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.812 [2024-09-28 08:54:16.606327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.812 08:54:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:38.812 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:39.071 /dev/nbd0 00:16:39.071 08:54:16 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.071 1+0 records in 00:16:39.071 1+0 records out 00:16:39.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540194 s, 7.6 MB/s 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:39.071 08:54:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:39.330 /dev/nbd1 00:16:39.330 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:39.330 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:39.330 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:39.330 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:39.330 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:39.330 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:39.331 1+0 records in 00:16:39.331 1+0 records out 00:16:39.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528414 s, 7.8 MB/s 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:39.331 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:39.590 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:39.590 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:39.590 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:39.590 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:39.590 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:39.590 08:54:17 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:39.590 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:39.590 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:39.590 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:39.590 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84609 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 84609 ']' 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 84609 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84609 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:39.850 killing process with pid 84609 00:16:39.850 Received shutdown signal, test time was about 60.000000 seconds 00:16:39.850 00:16:39.850 Latency(us) 00:16:39.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.850 =================================================================================================================== 00:16:39.850 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84609' 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 84609 00:16:39.850 [2024-09-28 08:54:17.788980] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:39.850 08:54:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 84609 00:16:40.418 [2024-09-28 08:54:18.304787] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:41.801 08:54:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:41.801 00:16:41.801 real 0m20.535s 00:16:41.801 user 0m23.930s 00:16:41.801 sys 0m2.914s 00:16:41.801 08:54:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:41.801 08:54:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.801 ************************************ 00:16:41.801 END TEST raid5f_rebuild_test 00:16:41.801 ************************************ 00:16:41.801 08:54:19 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:41.801 08:54:19 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:41.801 08:54:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:41.801 08:54:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.801 ************************************ 00:16:41.801 START TEST raid5f_rebuild_test_sb 00:16:41.801 ************************************ 00:16:41.801 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:16:41.801 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:41.801 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:41.802 08:54:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85133 
00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85133 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 85133 ']' 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:41.802 08:54:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.061 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:42.061 Zero copy mechanism will not be used. 00:16:42.061 [2024-09-28 08:54:19.803189] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:42.061 [2024-09-28 08:54:19.803296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85133 ] 00:16:42.061 [2024-09-28 08:54:19.968946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.321 [2024-09-28 08:54:20.213047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.580 [2024-09-28 08:54:20.438834] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.580 [2024-09-28 08:54:20.438951] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.840 BaseBdev1_malloc 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.840 [2024-09-28 08:54:20.680846] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:42.840 [2024-09-28 08:54:20.680921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.840 [2024-09-28 08:54:20.680945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:42.840 [2024-09-28 08:54:20.680960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.840 [2024-09-28 08:54:20.683212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.840 [2024-09-28 08:54:20.683249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:42.840 BaseBdev1 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.840 BaseBdev2_malloc 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.840 [2024-09-28 08:54:20.775686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:42.840 [2024-09-28 08:54:20.775744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:42.840 [2024-09-28 08:54:20.775765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:42.840 [2024-09-28 08:54:20.775775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.840 [2024-09-28 08:54:20.777892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.840 [2024-09-28 08:54:20.777930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:42.840 BaseBdev2 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.840 BaseBdev3_malloc 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:42.840 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.841 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.101 [2024-09-28 08:54:20.837789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:43.101 [2024-09-28 08:54:20.837842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.101 [2024-09-28 08:54:20.837864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:43.101 [2024-09-28 
08:54:20.837875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.101 [2024-09-28 08:54:20.840244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.101 [2024-09-28 08:54:20.840284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:43.101 BaseBdev3 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.101 BaseBdev4_malloc 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.101 [2024-09-28 08:54:20.899008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:43.101 [2024-09-28 08:54:20.899059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.101 [2024-09-28 08:54:20.899077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:43.101 [2024-09-28 08:54:20.899088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.101 [2024-09-28 08:54:20.901361] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:43.101 [2024-09-28 08:54:20.901399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:43.101 BaseBdev4 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.101 spare_malloc 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.101 spare_delay 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.101 [2024-09-28 08:54:20.973034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:43.101 [2024-09-28 08:54:20.973163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.101 [2024-09-28 08:54:20.973185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:16:43.101 [2024-09-28 08:54:20.973196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.101 [2024-09-28 08:54:20.975435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.101 [2024-09-28 08:54:20.975472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:43.101 spare 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.101 [2024-09-28 08:54:20.985086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:43.101 [2024-09-28 08:54:20.987022] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.101 [2024-09-28 08:54:20.987087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.101 [2024-09-28 08:54:20.987134] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:43.101 [2024-09-28 08:54:20.987312] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:43.101 [2024-09-28 08:54:20.987326] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:43.101 [2024-09-28 08:54:20.987595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:43.101 [2024-09-28 08:54:20.994047] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:43.101 [2024-09-28 08:54:20.994078] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:16:43.101 [2024-09-28 08:54:20.994244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.101 08:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.101 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.101 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.101 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.101 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.101 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.101 08:54:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.101 "name": "raid_bdev1", 00:16:43.101 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:43.101 "strip_size_kb": 64, 00:16:43.101 "state": "online", 00:16:43.101 "raid_level": "raid5f", 00:16:43.101 "superblock": true, 00:16:43.101 "num_base_bdevs": 4, 00:16:43.101 "num_base_bdevs_discovered": 4, 00:16:43.101 "num_base_bdevs_operational": 4, 00:16:43.101 "base_bdevs_list": [ 00:16:43.101 { 00:16:43.101 "name": "BaseBdev1", 00:16:43.101 "uuid": "22cbef34-3788-5d29-a6f1-d850fcc2cb80", 00:16:43.101 "is_configured": true, 00:16:43.101 "data_offset": 2048, 00:16:43.101 "data_size": 63488 00:16:43.101 }, 00:16:43.101 { 00:16:43.101 "name": "BaseBdev2", 00:16:43.101 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:43.101 "is_configured": true, 00:16:43.101 "data_offset": 2048, 00:16:43.101 "data_size": 63488 00:16:43.101 }, 00:16:43.101 { 00:16:43.101 "name": "BaseBdev3", 00:16:43.101 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:43.101 "is_configured": true, 00:16:43.101 "data_offset": 2048, 00:16:43.101 "data_size": 63488 00:16:43.102 }, 00:16:43.102 { 00:16:43.102 "name": "BaseBdev4", 00:16:43.102 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:43.102 "is_configured": true, 00:16:43.102 "data_offset": 2048, 00:16:43.102 "data_size": 63488 00:16:43.102 } 00:16:43.102 ] 00:16:43.102 }' 00:16:43.102 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.102 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:43.671 [2024-09-28 08:54:21.486247] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- 
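The raid_bdev_size of 190464 blocks read back above can be sanity-checked from numbers already present in this trace. This is an illustrative check, not part of the test script: it assumes each base bdev is the 32 MiB / 512-byte-block malloc bdev created earlier, the superblock accounts for the 2048-block data_offset, and raid5f stores one parity strip per stripe (so 3 data strips across 4 base bdevs).

```python
# Sanity-check of raid_bdev_size as reported by bdev_get_bdevs in this log.
# Assumptions (all taken from the trace above): 32 MiB malloc base bdevs with
# 512-byte blocks, data_offset = 2048 blocks, raid5f over 4 base bdevs.

BLOCK_SIZE = 512
MALLOC_MIB = 32
NUM_BASE_BDEVS = 4
DATA_OFFSET_BLOCKS = 2048  # "data_offset": 2048 in the RPC output

total_blocks = MALLOC_MIB * 1024 * 1024 // BLOCK_SIZE  # 65536 blocks per base bdev
data_blocks = total_blocks - DATA_OFFSET_BLOCKS        # matches "data_size": 63488

# raid5f keeps one parity strip per stripe, leaving N-1 data strips
raid_bdev_size = data_blocks * (NUM_BASE_BDEVS - 1)

print(data_blocks, raid_bdev_size)  # 63488 190464
```

Both values line up with the JSON the test reads: data_size 63488 per base bdev and 190464 total blocks for the array.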
bdev/nbd_common.sh@11 -- # local nbd_list 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:43.671 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:43.931 [2024-09-28 08:54:21.745664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:43.931 /dev/nbd0 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:43.931 1+0 records in 00:16:43.931 1+0 records out 00:16:43.931 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.0004471 s, 9.2 MB/s 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:43.931 08:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:44.872 496+0 records in 00:16:44.872 496+0 records out 00:16:44.872 97517568 bytes (98 MB, 93 MiB) copied, 0.82995 s, 117 MB/s 00:16:44.872 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:44.872 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:44.872 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:44.872 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:44.872 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # 
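The dd parameters above (bs=196608, count=496, total 97517568 bytes) follow directly from the full-stripe write geometry the script computed (write_unit_size=384). As a hedged sketch, assuming the 64 KiB strip size and 3 data strips per raid5f stripe seen earlier in this trace:

```python
# Sanity-check of the full-stripe write geometry behind the dd command above.
# Assumptions from the trace: strip_size_kb = 64, 512-byte blocks, raid5f over
# 4 base bdevs (3 data strips + 1 parity per stripe), raid size 190464 blocks.

BLOCK_SIZE = 512
STRIP_SIZE_BLOCKS = 64 * 1024 // BLOCK_SIZE  # 128 blocks per strip
DATA_STRIPS = 3                              # 4 base bdevs minus 1 parity
RAID_SIZE_BLOCKS = 190464

write_unit_blocks = STRIP_SIZE_BLOCKS * DATA_STRIPS   # matches write_unit_size=384
write_unit_bytes = write_unit_blocks * BLOCK_SIZE     # matches dd bs=196608
full_stripes = RAID_SIZE_BLOCKS // write_unit_blocks  # matches dd count=496
total_bytes = full_stripes * write_unit_bytes         # matches "97517568 bytes"

print(write_unit_blocks, write_unit_bytes, full_stripes, total_bytes)
```

So the dd writes exactly 496 aligned full stripes, covering the whole 190464-block array with no partial-stripe I/O.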
local i 00:16:44.872 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:44.872 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:45.132 [2024-09-28 08:54:22.881821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.132 [2024-09-28 08:54:22.911741] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.132 "name": "raid_bdev1", 00:16:45.132 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:45.132 "strip_size_kb": 64, 00:16:45.132 "state": "online", 00:16:45.132 "raid_level": "raid5f", 00:16:45.132 "superblock": true, 00:16:45.132 "num_base_bdevs": 4, 00:16:45.132 "num_base_bdevs_discovered": 3, 00:16:45.132 "num_base_bdevs_operational": 3, 00:16:45.132 "base_bdevs_list": [ 00:16:45.132 { 00:16:45.132 "name": null, 
00:16:45.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.132 "is_configured": false, 00:16:45.132 "data_offset": 0, 00:16:45.132 "data_size": 63488 00:16:45.132 }, 00:16:45.132 { 00:16:45.132 "name": "BaseBdev2", 00:16:45.132 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:45.132 "is_configured": true, 00:16:45.132 "data_offset": 2048, 00:16:45.132 "data_size": 63488 00:16:45.132 }, 00:16:45.132 { 00:16:45.132 "name": "BaseBdev3", 00:16:45.132 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:45.132 "is_configured": true, 00:16:45.132 "data_offset": 2048, 00:16:45.132 "data_size": 63488 00:16:45.132 }, 00:16:45.132 { 00:16:45.132 "name": "BaseBdev4", 00:16:45.132 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:45.132 "is_configured": true, 00:16:45.132 "data_offset": 2048, 00:16:45.132 "data_size": 63488 00:16:45.132 } 00:16:45.132 ] 00:16:45.132 }' 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.132 08:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.392 08:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:45.392 08:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.392 08:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.392 [2024-09-28 08:54:23.366892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:45.392 [2024-09-28 08:54:23.380560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:45.392 08:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.392 08:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:45.651 [2024-09-28 08:54:23.390310] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid 
bdev raid_bdev1 00:16:46.590 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.590 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.590 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.590 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.590 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.590 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.590 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.590 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.590 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.590 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.590 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.590 "name": "raid_bdev1", 00:16:46.590 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:46.590 "strip_size_kb": 64, 00:16:46.590 "state": "online", 00:16:46.590 "raid_level": "raid5f", 00:16:46.590 "superblock": true, 00:16:46.590 "num_base_bdevs": 4, 00:16:46.590 "num_base_bdevs_discovered": 4, 00:16:46.590 "num_base_bdevs_operational": 4, 00:16:46.590 "process": { 00:16:46.590 "type": "rebuild", 00:16:46.590 "target": "spare", 00:16:46.590 "progress": { 00:16:46.590 "blocks": 19200, 00:16:46.591 "percent": 10 00:16:46.591 } 00:16:46.591 }, 00:16:46.591 "base_bdevs_list": [ 00:16:46.591 { 00:16:46.591 "name": "spare", 00:16:46.591 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:16:46.591 "is_configured": true, 
00:16:46.591 "data_offset": 2048, 00:16:46.591 "data_size": 63488 00:16:46.591 }, 00:16:46.591 { 00:16:46.591 "name": "BaseBdev2", 00:16:46.591 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:46.591 "is_configured": true, 00:16:46.591 "data_offset": 2048, 00:16:46.591 "data_size": 63488 00:16:46.591 }, 00:16:46.591 { 00:16:46.591 "name": "BaseBdev3", 00:16:46.591 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:46.591 "is_configured": true, 00:16:46.591 "data_offset": 2048, 00:16:46.591 "data_size": 63488 00:16:46.591 }, 00:16:46.591 { 00:16:46.591 "name": "BaseBdev4", 00:16:46.591 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:46.591 "is_configured": true, 00:16:46.591 "data_offset": 2048, 00:16:46.591 "data_size": 63488 00:16:46.591 } 00:16:46.591 ] 00:16:46.591 }' 00:16:46.591 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.591 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.591 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.591 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.591 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:46.591 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.591 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.591 [2024-09-28 08:54:24.533333] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.851 [2024-09-28 08:54:24.597371] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:46.851 [2024-09-28 08:54:24.597433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.851 [2024-09-28 
08:54:24.597450] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.851 [2024-09-28 08:54:24.597464] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.851 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.851 "name": "raid_bdev1", 00:16:46.851 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:46.851 "strip_size_kb": 64, 00:16:46.851 "state": "online", 00:16:46.851 "raid_level": "raid5f", 00:16:46.851 "superblock": true, 00:16:46.851 "num_base_bdevs": 4, 00:16:46.851 "num_base_bdevs_discovered": 3, 00:16:46.851 "num_base_bdevs_operational": 3, 00:16:46.851 "base_bdevs_list": [ 00:16:46.851 { 00:16:46.851 "name": null, 00:16:46.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.851 "is_configured": false, 00:16:46.851 "data_offset": 0, 00:16:46.851 "data_size": 63488 00:16:46.851 }, 00:16:46.851 { 00:16:46.851 "name": "BaseBdev2", 00:16:46.851 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:46.851 "is_configured": true, 00:16:46.851 "data_offset": 2048, 00:16:46.851 "data_size": 63488 00:16:46.851 }, 00:16:46.851 { 00:16:46.851 "name": "BaseBdev3", 00:16:46.851 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:46.852 "is_configured": true, 00:16:46.852 "data_offset": 2048, 00:16:46.852 "data_size": 63488 00:16:46.852 }, 00:16:46.852 { 00:16:46.852 "name": "BaseBdev4", 00:16:46.852 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:46.852 "is_configured": true, 00:16:46.852 "data_offset": 2048, 00:16:46.852 "data_size": 63488 00:16:46.852 } 00:16:46.852 ] 00:16:46.852 }' 00:16:46.852 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.852 08:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.421 "name": "raid_bdev1", 00:16:47.421 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:47.421 "strip_size_kb": 64, 00:16:47.421 "state": "online", 00:16:47.421 "raid_level": "raid5f", 00:16:47.421 "superblock": true, 00:16:47.421 "num_base_bdevs": 4, 00:16:47.421 "num_base_bdevs_discovered": 3, 00:16:47.421 "num_base_bdevs_operational": 3, 00:16:47.421 "base_bdevs_list": [ 00:16:47.421 { 00:16:47.421 "name": null, 00:16:47.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.421 "is_configured": false, 00:16:47.421 "data_offset": 0, 00:16:47.421 "data_size": 63488 00:16:47.421 }, 00:16:47.421 { 00:16:47.421 "name": "BaseBdev2", 00:16:47.421 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:47.421 "is_configured": true, 00:16:47.421 "data_offset": 2048, 00:16:47.421 "data_size": 63488 00:16:47.421 }, 00:16:47.421 { 00:16:47.421 "name": "BaseBdev3", 00:16:47.421 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:47.421 "is_configured": true, 00:16:47.421 "data_offset": 2048, 00:16:47.421 "data_size": 63488 00:16:47.421 }, 
00:16:47.421 { 00:16:47.421 "name": "BaseBdev4", 00:16:47.421 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:47.421 "is_configured": true, 00:16:47.421 "data_offset": 2048, 00:16:47.421 "data_size": 63488 00:16:47.421 } 00:16:47.421 ] 00:16:47.421 }' 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.421 [2024-09-28 08:54:25.278600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:47.421 [2024-09-28 08:54:25.292079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.421 08:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:47.421 [2024-09-28 08:54:25.301132] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:48.363 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.363 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.363 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
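The `verify_raid_bdev_process` checks repeated throughout this log reduce to one pattern: fetch all raid bdevs over RPC, select the one under test, and read `.process.type` / `.process.target` with a `// "none"` fallback. A self-contained sketch of just the jq side (the sample JSON below is trimmed; in the test it comes from `rpc_cmd bdev_raid_get_bdevs all`):

```shell
# Trimmed stand-in for the bdev_raid_get_bdevs output seen in the log.
info='[{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare"}}]'

# Same filters as bdev_raid.sh@174/176/177: select by name, then read the
# process fields, defaulting to "none" when no process is running.
ptype=$(printf '%s' "$info" | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
ptarget=$(printf '%s' "$info" | jq -r '.[] | select(.name == "raid_bdev1") | .process.target // "none"')
echo "$ptype $ptarget"   # rebuild spare
```

The `//` alternative operator is what lets the same filter pass both the "rebuild in progress" and the "no process" checks later in the log.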
00:16:48.363 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.363 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.363 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.363 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.363 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.363 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.363 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.363 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.363 "name": "raid_bdev1", 00:16:48.363 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:48.363 "strip_size_kb": 64, 00:16:48.363 "state": "online", 00:16:48.363 "raid_level": "raid5f", 00:16:48.363 "superblock": true, 00:16:48.363 "num_base_bdevs": 4, 00:16:48.363 "num_base_bdevs_discovered": 4, 00:16:48.363 "num_base_bdevs_operational": 4, 00:16:48.363 "process": { 00:16:48.363 "type": "rebuild", 00:16:48.363 "target": "spare", 00:16:48.363 "progress": { 00:16:48.363 "blocks": 19200, 00:16:48.363 "percent": 10 00:16:48.363 } 00:16:48.363 }, 00:16:48.363 "base_bdevs_list": [ 00:16:48.363 { 00:16:48.363 "name": "spare", 00:16:48.363 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:16:48.363 "is_configured": true, 00:16:48.363 "data_offset": 2048, 00:16:48.363 "data_size": 63488 00:16:48.363 }, 00:16:48.363 { 00:16:48.363 "name": "BaseBdev2", 00:16:48.363 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:48.363 "is_configured": true, 00:16:48.363 "data_offset": 2048, 00:16:48.363 "data_size": 63488 00:16:48.363 }, 00:16:48.363 { 00:16:48.363 "name": "BaseBdev3", 00:16:48.363 "uuid": 
"ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:48.363 "is_configured": true, 00:16:48.363 "data_offset": 2048, 00:16:48.363 "data_size": 63488 00:16:48.363 }, 00:16:48.363 { 00:16:48.363 "name": "BaseBdev4", 00:16:48.363 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:48.363 "is_configured": true, 00:16:48.363 "data_offset": 2048, 00:16:48.363 "data_size": 63488 00:16:48.363 } 00:16:48.363 ] 00:16:48.363 }' 00:16:48.364 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:48.623 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=646 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
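The `bdev_raid.sh: line 666: [: =: unary operator expected` error above comes from a single-bracket test whose left operand was an unquoted empty expansion: the xtrace shows it collapsed to `'[' = false ']'`. A minimal reproduction of the failure mode and the usual quoting fix (the variable name `fast` is hypothetical, standing in for the optional argument that was never passed):

```shell
fast=""   # hypothetical optional flag left empty, as in the log

# Unquoted, this would collapse to:  [ = false ]
# which is exactly the "[: =: unary operator expected" error above.
# Quoting the expansion keeps both operands in place:
if [ "$fast" = false ]; then mode=slow; else mode=default; fi
echo "$mode"   # default
```

Using `[[ "$fast" = false ]]` would also avoid the error, since `[[ ]]` does not word-split its operands.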
00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.623 "name": "raid_bdev1", 00:16:48.623 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:48.623 "strip_size_kb": 64, 00:16:48.623 "state": "online", 00:16:48.623 "raid_level": "raid5f", 00:16:48.623 "superblock": true, 00:16:48.623 "num_base_bdevs": 4, 00:16:48.623 "num_base_bdevs_discovered": 4, 00:16:48.623 "num_base_bdevs_operational": 4, 00:16:48.623 "process": { 00:16:48.623 "type": "rebuild", 00:16:48.623 "target": "spare", 00:16:48.623 "progress": { 00:16:48.623 "blocks": 21120, 00:16:48.623 "percent": 11 00:16:48.623 } 00:16:48.623 }, 00:16:48.623 "base_bdevs_list": [ 00:16:48.623 { 00:16:48.623 "name": "spare", 00:16:48.623 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:16:48.623 "is_configured": true, 00:16:48.623 "data_offset": 2048, 00:16:48.623 "data_size": 63488 00:16:48.623 }, 00:16:48.623 { 00:16:48.623 "name": "BaseBdev2", 00:16:48.623 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:48.623 "is_configured": true, 00:16:48.623 "data_offset": 2048, 00:16:48.623 "data_size": 63488 00:16:48.623 }, 00:16:48.623 { 00:16:48.623 "name": "BaseBdev3", 00:16:48.623 "uuid": 
"ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:48.623 "is_configured": true, 00:16:48.623 "data_offset": 2048, 00:16:48.623 "data_size": 63488 00:16:48.623 }, 00:16:48.623 { 00:16:48.623 "name": "BaseBdev4", 00:16:48.623 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:48.623 "is_configured": true, 00:16:48.623 "data_offset": 2048, 00:16:48.623 "data_size": 63488 00:16:48.623 } 00:16:48.623 ] 00:16:48.623 }' 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.623 08:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:50.004 08:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.004 08:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.004 08:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.004 08:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.004 08:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.004 08:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.004 08:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.004 08:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.004 08:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:50.004 08:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.004 08:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.004 08:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.004 "name": "raid_bdev1", 00:16:50.004 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:50.004 "strip_size_kb": 64, 00:16:50.004 "state": "online", 00:16:50.004 "raid_level": "raid5f", 00:16:50.004 "superblock": true, 00:16:50.004 "num_base_bdevs": 4, 00:16:50.004 "num_base_bdevs_discovered": 4, 00:16:50.004 "num_base_bdevs_operational": 4, 00:16:50.004 "process": { 00:16:50.004 "type": "rebuild", 00:16:50.004 "target": "spare", 00:16:50.004 "progress": { 00:16:50.004 "blocks": 42240, 00:16:50.004 "percent": 22 00:16:50.004 } 00:16:50.004 }, 00:16:50.004 "base_bdevs_list": [ 00:16:50.004 { 00:16:50.004 "name": "spare", 00:16:50.004 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:16:50.004 "is_configured": true, 00:16:50.004 "data_offset": 2048, 00:16:50.004 "data_size": 63488 00:16:50.004 }, 00:16:50.004 { 00:16:50.004 "name": "BaseBdev2", 00:16:50.004 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:50.004 "is_configured": true, 00:16:50.004 "data_offset": 2048, 00:16:50.004 "data_size": 63488 00:16:50.004 }, 00:16:50.004 { 00:16:50.004 "name": "BaseBdev3", 00:16:50.004 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:50.004 "is_configured": true, 00:16:50.004 "data_offset": 2048, 00:16:50.004 "data_size": 63488 00:16:50.004 }, 00:16:50.004 { 00:16:50.004 "name": "BaseBdev4", 00:16:50.004 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:50.004 "is_configured": true, 00:16:50.004 "data_offset": 2048, 00:16:50.004 "data_size": 63488 00:16:50.004 } 00:16:50.004 ] 00:16:50.004 }' 00:16:50.004 08:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.004 08:54:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.005 08:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.005 08:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.005 08:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:50.943 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.943 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.943 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.943 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.943 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.943 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.943 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.943 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.943 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.943 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.943 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.943 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.943 "name": "raid_bdev1", 00:16:50.943 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:50.944 "strip_size_kb": 64, 00:16:50.944 "state": "online", 00:16:50.944 "raid_level": "raid5f", 00:16:50.944 "superblock": true, 
00:16:50.944 "num_base_bdevs": 4, 00:16:50.944 "num_base_bdevs_discovered": 4, 00:16:50.944 "num_base_bdevs_operational": 4, 00:16:50.944 "process": { 00:16:50.944 "type": "rebuild", 00:16:50.944 "target": "spare", 00:16:50.944 "progress": { 00:16:50.944 "blocks": 65280, 00:16:50.944 "percent": 34 00:16:50.944 } 00:16:50.944 }, 00:16:50.944 "base_bdevs_list": [ 00:16:50.944 { 00:16:50.944 "name": "spare", 00:16:50.944 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:16:50.944 "is_configured": true, 00:16:50.944 "data_offset": 2048, 00:16:50.944 "data_size": 63488 00:16:50.944 }, 00:16:50.944 { 00:16:50.944 "name": "BaseBdev2", 00:16:50.944 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:50.944 "is_configured": true, 00:16:50.944 "data_offset": 2048, 00:16:50.944 "data_size": 63488 00:16:50.944 }, 00:16:50.944 { 00:16:50.944 "name": "BaseBdev3", 00:16:50.944 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:50.944 "is_configured": true, 00:16:50.944 "data_offset": 2048, 00:16:50.944 "data_size": 63488 00:16:50.944 }, 00:16:50.944 { 00:16:50.944 "name": "BaseBdev4", 00:16:50.944 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:50.944 "is_configured": true, 00:16:50.944 "data_offset": 2048, 00:16:50.944 "data_size": 63488 00:16:50.944 } 00:16:50.944 ] 00:16:50.944 }' 00:16:50.944 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.944 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.944 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.944 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.944 08:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.324 08:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.324 08:54:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.324 08:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.324 08:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.324 08:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.324 08:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.324 08:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.324 08:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.324 08:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.324 08:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.324 08:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.324 08:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.324 "name": "raid_bdev1", 00:16:52.324 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:52.324 "strip_size_kb": 64, 00:16:52.324 "state": "online", 00:16:52.324 "raid_level": "raid5f", 00:16:52.324 "superblock": true, 00:16:52.324 "num_base_bdevs": 4, 00:16:52.324 "num_base_bdevs_discovered": 4, 00:16:52.324 "num_base_bdevs_operational": 4, 00:16:52.324 "process": { 00:16:52.324 "type": "rebuild", 00:16:52.324 "target": "spare", 00:16:52.324 "progress": { 00:16:52.324 "blocks": 86400, 00:16:52.324 "percent": 45 00:16:52.324 } 00:16:52.324 }, 00:16:52.324 "base_bdevs_list": [ 00:16:52.324 { 00:16:52.324 "name": "spare", 00:16:52.324 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:16:52.324 "is_configured": true, 00:16:52.324 "data_offset": 2048, 00:16:52.324 
"data_size": 63488 00:16:52.324 }, 00:16:52.324 { 00:16:52.324 "name": "BaseBdev2", 00:16:52.324 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:52.324 "is_configured": true, 00:16:52.324 "data_offset": 2048, 00:16:52.324 "data_size": 63488 00:16:52.324 }, 00:16:52.324 { 00:16:52.324 "name": "BaseBdev3", 00:16:52.324 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:52.324 "is_configured": true, 00:16:52.324 "data_offset": 2048, 00:16:52.324 "data_size": 63488 00:16:52.324 }, 00:16:52.324 { 00:16:52.324 "name": "BaseBdev4", 00:16:52.324 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:52.324 "is_configured": true, 00:16:52.324 "data_offset": 2048, 00:16:52.324 "data_size": 63488 00:16:52.324 } 00:16:52.324 ] 00:16:52.324 }' 00:16:52.324 08:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.324 08:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.324 08:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.324 08:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.324 08:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
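The `(( SECONDS < timeout ))` / `sleep 1` lines repeating through this section are a bash poll-with-deadline loop: `SECONDS` counts seconds since shell start, so `local timeout=646` caps the whole rebuild wait. A compressed sketch of the shape (the 5-second budget here is for illustration only):

```shell
# Poll-with-deadline pattern, as in bdev_raid.sh@706-711.
timeout=$(( SECONDS + 5 ))   # sketch budget; the test uses timeout=646
state=""
while (( SECONDS < timeout )); do
    state=done               # real script: re-query rebuild progress via RPC
    [ "$state" = done ] && break
    sleep 1                  # real script sleeps between progress checks
done
echo "$state"   # done
```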
00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.263 "name": "raid_bdev1", 00:16:53.263 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:53.263 "strip_size_kb": 64, 00:16:53.263 "state": "online", 00:16:53.263 "raid_level": "raid5f", 00:16:53.263 "superblock": true, 00:16:53.263 "num_base_bdevs": 4, 00:16:53.263 "num_base_bdevs_discovered": 4, 00:16:53.263 "num_base_bdevs_operational": 4, 00:16:53.263 "process": { 00:16:53.263 "type": "rebuild", 00:16:53.263 "target": "spare", 00:16:53.263 "progress": { 00:16:53.263 "blocks": 109440, 00:16:53.263 "percent": 57 00:16:53.263 } 00:16:53.263 }, 00:16:53.263 "base_bdevs_list": [ 00:16:53.263 { 00:16:53.263 "name": "spare", 00:16:53.263 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:16:53.263 "is_configured": true, 00:16:53.263 "data_offset": 2048, 00:16:53.263 "data_size": 63488 00:16:53.263 }, 00:16:53.263 { 00:16:53.263 "name": "BaseBdev2", 00:16:53.263 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:53.263 "is_configured": true, 00:16:53.263 "data_offset": 2048, 00:16:53.263 "data_size": 63488 00:16:53.263 }, 00:16:53.263 { 00:16:53.263 "name": "BaseBdev3", 00:16:53.263 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:53.263 "is_configured": true, 00:16:53.263 "data_offset": 2048, 00:16:53.263 "data_size": 63488 00:16:53.263 }, 00:16:53.263 { 00:16:53.263 "name": "BaseBdev4", 
00:16:53.263 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:53.263 "is_configured": true, 00:16:53.263 "data_offset": 2048, 00:16:53.263 "data_size": 63488 00:16:53.263 } 00:16:53.263 ] 00:16:53.263 }' 00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.263 08:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:54.202 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.202 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.202 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.202 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.202 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.202 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.202 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.202 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.202 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.202 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.202 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:54.462 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.462 "name": "raid_bdev1", 00:16:54.462 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:54.462 "strip_size_kb": 64, 00:16:54.462 "state": "online", 00:16:54.462 "raid_level": "raid5f", 00:16:54.462 "superblock": true, 00:16:54.462 "num_base_bdevs": 4, 00:16:54.462 "num_base_bdevs_discovered": 4, 00:16:54.462 "num_base_bdevs_operational": 4, 00:16:54.462 "process": { 00:16:54.462 "type": "rebuild", 00:16:54.462 "target": "spare", 00:16:54.462 "progress": { 00:16:54.462 "blocks": 130560, 00:16:54.462 "percent": 68 00:16:54.462 } 00:16:54.462 }, 00:16:54.462 "base_bdevs_list": [ 00:16:54.462 { 00:16:54.462 "name": "spare", 00:16:54.462 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:16:54.462 "is_configured": true, 00:16:54.462 "data_offset": 2048, 00:16:54.462 "data_size": 63488 00:16:54.462 }, 00:16:54.462 { 00:16:54.462 "name": "BaseBdev2", 00:16:54.462 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:54.462 "is_configured": true, 00:16:54.462 "data_offset": 2048, 00:16:54.462 "data_size": 63488 00:16:54.462 }, 00:16:54.462 { 00:16:54.462 "name": "BaseBdev3", 00:16:54.462 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:54.462 "is_configured": true, 00:16:54.462 "data_offset": 2048, 00:16:54.462 "data_size": 63488 00:16:54.462 }, 00:16:54.462 { 00:16:54.462 "name": "BaseBdev4", 00:16:54.462 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:54.462 "is_configured": true, 00:16:54.462 "data_offset": 2048, 00:16:54.462 "data_size": 63488 00:16:54.462 } 00:16:54.462 ] 00:16:54.462 }' 00:16:54.462 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.462 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.462 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
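The odd-looking `[[ spare == \s\p\a\r\e ]]` comparisons in this log are an xtrace artifact: the right-hand side of `[[ == ]]` is a glob pattern, and `set -x` prints it with every character backslash-escaped to show it will match literally. A short demonstration of the literal-vs-glob distinction:

```shell
target=spare

# Escaped (or quoted) RHS matches literally; xtrace would print it
# as \s\p\a\r\e, exactly as seen in the log above.
[[ $target == \s\p\a\r\e ]] && match=literal

# An unescaped RHS is treated as a glob pattern instead.
[[ $target == s* ]] && glob=yes

echo "$match $glob"   # literal yes
```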
00:16:54.462 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.462 08:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.400 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.400 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.400 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.400 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.400 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.400 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.400 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.400 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.400 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.400 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.400 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.400 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.400 "name": "raid_bdev1", 00:16:55.400 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:55.400 "strip_size_kb": 64, 00:16:55.400 "state": "online", 00:16:55.400 "raid_level": "raid5f", 00:16:55.400 "superblock": true, 00:16:55.400 "num_base_bdevs": 4, 00:16:55.400 "num_base_bdevs_discovered": 4, 00:16:55.400 "num_base_bdevs_operational": 4, 00:16:55.400 "process": { 00:16:55.400 "type": "rebuild", 00:16:55.400 "target": "spare", 
00:16:55.400 "progress": { 00:16:55.400 "blocks": 151680, 00:16:55.400 "percent": 79 00:16:55.400 } 00:16:55.400 }, 00:16:55.400 "base_bdevs_list": [ 00:16:55.400 { 00:16:55.400 "name": "spare", 00:16:55.400 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:16:55.400 "is_configured": true, 00:16:55.400 "data_offset": 2048, 00:16:55.400 "data_size": 63488 00:16:55.400 }, 00:16:55.400 { 00:16:55.400 "name": "BaseBdev2", 00:16:55.400 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:55.400 "is_configured": true, 00:16:55.400 "data_offset": 2048, 00:16:55.400 "data_size": 63488 00:16:55.400 }, 00:16:55.400 { 00:16:55.400 "name": "BaseBdev3", 00:16:55.400 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:55.400 "is_configured": true, 00:16:55.400 "data_offset": 2048, 00:16:55.400 "data_size": 63488 00:16:55.400 }, 00:16:55.400 { 00:16:55.400 "name": "BaseBdev4", 00:16:55.400 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:55.400 "is_configured": true, 00:16:55.400 "data_offset": 2048, 00:16:55.400 "data_size": 63488 00:16:55.400 } 00:16:55.400 ] 00:16:55.400 }' 00:16:55.400 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.662 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.662 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.662 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.662 08:54:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:56.601 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:56.601 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.601 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:56.601 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.601 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.601 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.601 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.601 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.601 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.601 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.601 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.601 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.601 "name": "raid_bdev1", 00:16:56.601 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:56.601 "strip_size_kb": 64, 00:16:56.601 "state": "online", 00:16:56.601 "raid_level": "raid5f", 00:16:56.601 "superblock": true, 00:16:56.601 "num_base_bdevs": 4, 00:16:56.601 "num_base_bdevs_discovered": 4, 00:16:56.601 "num_base_bdevs_operational": 4, 00:16:56.601 "process": { 00:16:56.601 "type": "rebuild", 00:16:56.601 "target": "spare", 00:16:56.601 "progress": { 00:16:56.601 "blocks": 174720, 00:16:56.601 "percent": 91 00:16:56.601 } 00:16:56.601 }, 00:16:56.601 "base_bdevs_list": [ 00:16:56.601 { 00:16:56.601 "name": "spare", 00:16:56.601 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:16:56.601 "is_configured": true, 00:16:56.601 "data_offset": 2048, 00:16:56.601 "data_size": 63488 00:16:56.601 }, 00:16:56.601 { 00:16:56.601 "name": "BaseBdev2", 00:16:56.601 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:56.601 "is_configured": true, 00:16:56.601 
"data_offset": 2048, 00:16:56.601 "data_size": 63488 00:16:56.601 }, 00:16:56.601 { 00:16:56.601 "name": "BaseBdev3", 00:16:56.601 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:56.601 "is_configured": true, 00:16:56.601 "data_offset": 2048, 00:16:56.601 "data_size": 63488 00:16:56.601 }, 00:16:56.601 { 00:16:56.601 "name": "BaseBdev4", 00:16:56.601 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:56.601 "is_configured": true, 00:16:56.601 "data_offset": 2048, 00:16:56.601 "data_size": 63488 00:16:56.601 } 00:16:56.601 ] 00:16:56.601 }' 00:16:56.601 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.601 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.601 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.860 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.860 08:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:57.429 [2024-09-28 08:54:35.350311] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:57.429 [2024-09-28 08:54:35.350423] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:57.429 [2024-09-28 08:54:35.350565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.689 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.690 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.690 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.690 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.690 08:54:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.690 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.690 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.690 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.690 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.690 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.690 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.690 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.690 "name": "raid_bdev1", 00:16:57.690 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:57.690 "strip_size_kb": 64, 00:16:57.690 "state": "online", 00:16:57.690 "raid_level": "raid5f", 00:16:57.690 "superblock": true, 00:16:57.690 "num_base_bdevs": 4, 00:16:57.690 "num_base_bdevs_discovered": 4, 00:16:57.690 "num_base_bdevs_operational": 4, 00:16:57.690 "base_bdevs_list": [ 00:16:57.690 { 00:16:57.690 "name": "spare", 00:16:57.690 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:16:57.690 "is_configured": true, 00:16:57.690 "data_offset": 2048, 00:16:57.690 "data_size": 63488 00:16:57.690 }, 00:16:57.690 { 00:16:57.690 "name": "BaseBdev2", 00:16:57.690 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:57.690 "is_configured": true, 00:16:57.690 "data_offset": 2048, 00:16:57.690 "data_size": 63488 00:16:57.690 }, 00:16:57.690 { 00:16:57.690 "name": "BaseBdev3", 00:16:57.690 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:57.690 "is_configured": true, 00:16:57.690 "data_offset": 2048, 00:16:57.690 "data_size": 63488 00:16:57.690 }, 00:16:57.690 { 00:16:57.690 "name": "BaseBdev4", 00:16:57.690 "uuid": 
"17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:57.690 "is_configured": true, 00:16:57.690 "data_offset": 2048, 00:16:57.690 "data_size": 63488 00:16:57.690 } 00:16:57.690 ] 00:16:57.690 }' 00:16:57.690 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.949 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:57.949 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.949 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:57.949 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:57.949 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:57.949 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.949 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:57.949 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:57.949 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.949 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.949 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.949 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.949 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.949 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.949 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.949 "name": 
"raid_bdev1", 00:16:57.949 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:57.949 "strip_size_kb": 64, 00:16:57.949 "state": "online", 00:16:57.949 "raid_level": "raid5f", 00:16:57.949 "superblock": true, 00:16:57.949 "num_base_bdevs": 4, 00:16:57.949 "num_base_bdevs_discovered": 4, 00:16:57.949 "num_base_bdevs_operational": 4, 00:16:57.949 "base_bdevs_list": [ 00:16:57.949 { 00:16:57.949 "name": "spare", 00:16:57.950 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:16:57.950 "is_configured": true, 00:16:57.950 "data_offset": 2048, 00:16:57.950 "data_size": 63488 00:16:57.950 }, 00:16:57.950 { 00:16:57.950 "name": "BaseBdev2", 00:16:57.950 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:57.950 "is_configured": true, 00:16:57.950 "data_offset": 2048, 00:16:57.950 "data_size": 63488 00:16:57.950 }, 00:16:57.950 { 00:16:57.950 "name": "BaseBdev3", 00:16:57.950 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:57.950 "is_configured": true, 00:16:57.950 "data_offset": 2048, 00:16:57.950 "data_size": 63488 00:16:57.950 }, 00:16:57.950 { 00:16:57.950 "name": "BaseBdev4", 00:16:57.950 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:57.950 "is_configured": true, 00:16:57.950 "data_offset": 2048, 00:16:57.950 "data_size": 63488 00:16:57.950 } 00:16:57.950 ] 00:16:57.950 }' 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.950 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.208 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.208 "name": "raid_bdev1", 00:16:58.208 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:58.208 "strip_size_kb": 64, 00:16:58.208 "state": "online", 00:16:58.208 "raid_level": "raid5f", 00:16:58.208 "superblock": true, 00:16:58.208 "num_base_bdevs": 4, 00:16:58.208 "num_base_bdevs_discovered": 4, 00:16:58.208 "num_base_bdevs_operational": 4, 00:16:58.208 "base_bdevs_list": [ 00:16:58.208 { 00:16:58.208 "name": "spare", 
00:16:58.208 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:16:58.208 "is_configured": true, 00:16:58.208 "data_offset": 2048, 00:16:58.208 "data_size": 63488 00:16:58.208 }, 00:16:58.208 { 00:16:58.208 "name": "BaseBdev2", 00:16:58.208 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:58.209 "is_configured": true, 00:16:58.209 "data_offset": 2048, 00:16:58.209 "data_size": 63488 00:16:58.209 }, 00:16:58.209 { 00:16:58.209 "name": "BaseBdev3", 00:16:58.209 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:58.209 "is_configured": true, 00:16:58.209 "data_offset": 2048, 00:16:58.209 "data_size": 63488 00:16:58.209 }, 00:16:58.209 { 00:16:58.209 "name": "BaseBdev4", 00:16:58.209 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:58.209 "is_configured": true, 00:16:58.209 "data_offset": 2048, 00:16:58.209 "data_size": 63488 00:16:58.209 } 00:16:58.209 ] 00:16:58.209 }' 00:16:58.209 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.209 08:54:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.468 [2024-09-28 08:54:36.291578] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.468 [2024-09-28 08:54:36.291666] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.468 [2024-09-28 08:54:36.291774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.468 [2024-09-28 08:54:36.291904] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.468 [2024-09-28 08:54:36.291957] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:58.468 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:58.728 /dev/nbd0 00:16:58.728 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:58.728 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:58.728 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:58.728 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:58.728 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:58.728 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:58.728 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:58.728 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:58.728 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:58.728 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:58.729 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:58.729 1+0 records in 00:16:58.729 1+0 records out 00:16:58.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406021 s, 10.1 MB/s 00:16:58.729 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.729 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:58.729 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.729 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:58.729 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:58.729 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:58.729 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:58.729 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:58.988 /dev/nbd1 00:16:58.988 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:58.988 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:58.988 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:58.988 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:58.988 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:58.988 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:58.988 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:58.988 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:58.988 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:58.988 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:58.988 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:58.988 1+0 records in 00:16:58.988 1+0 records out 00:16:58.988 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000437357 s, 9.4 MB/s 00:16:58.988 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.988 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:58.989 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.989 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:58.989 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:58.989 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:58.989 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:58.989 08:54:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:59.249 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:59.249 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:59.249 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:59.249 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:59.249 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:59.249 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:59.249 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:59.249 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:59.249 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:59.249 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:59.249 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:59.249 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:59.249 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:59.509 08:54:37 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.509 [2024-09-28 08:54:37.473685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:59.509 [2024-09-28 08:54:37.473741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.509 [2024-09-28 08:54:37.473767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:59.509 [2024-09-28 08:54:37.473776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.509 [2024-09-28 08:54:37.476294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.509 [2024-09-28 08:54:37.476390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:59.509 [2024-09-28 08:54:37.476501] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:59.509 [2024-09-28 08:54:37.476559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:59.509 [2024-09-28 08:54:37.476728] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.509 [2024-09-28 08:54:37.476830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:59.509 [2024-09-28 08:54:37.476919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:16:59.509 spare 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.509 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.768 [2024-09-28 08:54:37.576824] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:59.768 [2024-09-28 08:54:37.576894] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:59.768 [2024-09-28 08:54:37.577199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:59.768 [2024-09-28 08:54:37.583859] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:59.768 [2024-09-28 08:54:37.583919] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:59.768 [2024-09-28 08:54:37.584147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.768 "name": "raid_bdev1", 00:16:59.768 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:16:59.768 "strip_size_kb": 64, 00:16:59.768 "state": "online", 00:16:59.768 "raid_level": "raid5f", 00:16:59.768 "superblock": true, 00:16:59.768 "num_base_bdevs": 4, 00:16:59.768 "num_base_bdevs_discovered": 4, 00:16:59.768 "num_base_bdevs_operational": 4, 00:16:59.768 "base_bdevs_list": [ 00:16:59.768 { 00:16:59.768 "name": "spare", 00:16:59.768 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:16:59.768 "is_configured": true, 00:16:59.768 "data_offset": 2048, 00:16:59.768 "data_size": 63488 00:16:59.768 }, 00:16:59.768 { 00:16:59.768 "name": "BaseBdev2", 00:16:59.768 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:16:59.768 "is_configured": true, 00:16:59.768 "data_offset": 2048, 00:16:59.768 "data_size": 63488 00:16:59.768 }, 00:16:59.768 { 00:16:59.768 "name": 
"BaseBdev3", 00:16:59.768 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:16:59.768 "is_configured": true, 00:16:59.768 "data_offset": 2048, 00:16:59.768 "data_size": 63488 00:16:59.768 }, 00:16:59.768 { 00:16:59.768 "name": "BaseBdev4", 00:16:59.768 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:16:59.768 "is_configured": true, 00:16:59.768 "data_offset": 2048, 00:16:59.768 "data_size": 63488 00:16:59.768 } 00:16:59.768 ] 00:16:59.768 }' 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.768 08:54:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.337 "name": "raid_bdev1", 00:17:00.337 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:17:00.337 
"strip_size_kb": 64, 00:17:00.337 "state": "online", 00:17:00.337 "raid_level": "raid5f", 00:17:00.337 "superblock": true, 00:17:00.337 "num_base_bdevs": 4, 00:17:00.337 "num_base_bdevs_discovered": 4, 00:17:00.337 "num_base_bdevs_operational": 4, 00:17:00.337 "base_bdevs_list": [ 00:17:00.337 { 00:17:00.337 "name": "spare", 00:17:00.337 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:17:00.337 "is_configured": true, 00:17:00.337 "data_offset": 2048, 00:17:00.337 "data_size": 63488 00:17:00.337 }, 00:17:00.337 { 00:17:00.337 "name": "BaseBdev2", 00:17:00.337 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:17:00.337 "is_configured": true, 00:17:00.337 "data_offset": 2048, 00:17:00.337 "data_size": 63488 00:17:00.337 }, 00:17:00.337 { 00:17:00.337 "name": "BaseBdev3", 00:17:00.337 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:17:00.337 "is_configured": true, 00:17:00.337 "data_offset": 2048, 00:17:00.337 "data_size": 63488 00:17:00.337 }, 00:17:00.337 { 00:17:00.337 "name": "BaseBdev4", 00:17:00.337 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:17:00.337 "is_configured": true, 00:17:00.337 "data_offset": 2048, 00:17:00.337 "data_size": 63488 00:17:00.337 } 00:17:00.337 ] 00:17:00.337 }' 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.337 [2024-09-28 08:54:38.292058] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.337 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.596 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.596 "name": "raid_bdev1", 00:17:00.596 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:17:00.596 "strip_size_kb": 64, 00:17:00.596 "state": "online", 00:17:00.596 "raid_level": "raid5f", 00:17:00.596 "superblock": true, 00:17:00.596 "num_base_bdevs": 4, 00:17:00.596 "num_base_bdevs_discovered": 3, 00:17:00.596 "num_base_bdevs_operational": 3, 00:17:00.596 "base_bdevs_list": [ 00:17:00.596 { 00:17:00.596 "name": null, 00:17:00.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.596 "is_configured": false, 00:17:00.596 "data_offset": 0, 00:17:00.596 "data_size": 63488 00:17:00.596 }, 00:17:00.596 { 00:17:00.596 "name": "BaseBdev2", 00:17:00.596 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:17:00.596 "is_configured": true, 00:17:00.596 "data_offset": 2048, 00:17:00.596 "data_size": 63488 00:17:00.596 }, 00:17:00.596 { 00:17:00.596 "name": "BaseBdev3", 00:17:00.596 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:17:00.596 "is_configured": true, 00:17:00.596 "data_offset": 2048, 00:17:00.596 "data_size": 63488 00:17:00.596 }, 00:17:00.596 { 00:17:00.596 "name": "BaseBdev4", 00:17:00.596 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:17:00.596 "is_configured": true, 00:17:00.596 "data_offset": 2048, 00:17:00.596 "data_size": 63488 00:17:00.596 } 00:17:00.596 ] 00:17:00.596 }' 
00:17:00.596 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.596 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.855 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:00.855 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.855 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.855 [2024-09-28 08:54:38.703497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:00.855 [2024-09-28 08:54:38.703760] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:00.855 [2024-09-28 08:54:38.703826] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:00.855 [2024-09-28 08:54:38.703893] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:00.855 [2024-09-28 08:54:38.717615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:00.855 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.855 08:54:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:00.855 [2024-09-28 08:54:38.726453] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:01.793 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.793 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.793 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.793 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.793 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.793 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.793 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.793 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.793 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.793 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.793 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.793 "name": "raid_bdev1", 00:17:01.793 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:17:01.793 "strip_size_kb": 64, 00:17:01.793 "state": "online", 00:17:01.793 "raid_level": "raid5f", 00:17:01.793 "superblock": true, 00:17:01.793 "num_base_bdevs": 4, 00:17:01.793 "num_base_bdevs_discovered": 4, 00:17:01.793 "num_base_bdevs_operational": 4, 00:17:01.793 "process": { 00:17:01.793 "type": "rebuild", 00:17:01.793 "target": "spare", 00:17:01.793 "progress": { 00:17:01.793 "blocks": 19200, 00:17:01.793 "percent": 10 00:17:01.793 } 00:17:01.793 }, 00:17:01.793 "base_bdevs_list": [ 00:17:01.793 { 00:17:01.793 "name": "spare", 00:17:01.793 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:17:01.793 "is_configured": true, 00:17:01.793 "data_offset": 2048, 00:17:01.793 "data_size": 63488 00:17:01.793 }, 00:17:01.793 { 00:17:01.793 "name": "BaseBdev2", 00:17:01.793 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:17:01.793 "is_configured": true, 00:17:01.793 "data_offset": 2048, 00:17:01.793 "data_size": 63488 00:17:01.793 }, 00:17:01.793 { 00:17:01.793 "name": "BaseBdev3", 00:17:01.793 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:17:01.793 
"is_configured": true, 00:17:01.793 "data_offset": 2048, 00:17:01.793 "data_size": 63488 00:17:01.793 }, 00:17:01.793 { 00:17:01.793 "name": "BaseBdev4", 00:17:01.793 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:17:01.793 "is_configured": true, 00:17:01.793 "data_offset": 2048, 00:17:01.793 "data_size": 63488 00:17:01.793 } 00:17:01.793 ] 00:17:01.793 }' 00:17:01.793 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.051 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.051 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.051 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.051 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:02.051 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.051 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.052 [2024-09-28 08:54:39.877575] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:02.052 [2024-09-28 08:54:39.933603] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:02.052 [2024-09-28 08:54:39.933733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.052 [2024-09-28 08:54:39.933774] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:02.052 [2024-09-28 08:54:39.933800] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:02.052 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.052 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:17:02.052 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.052 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.052 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.052 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.052 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.052 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.052 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.052 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.052 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.052 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.052 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.052 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.052 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.052 08:54:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.052 08:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.052 "name": "raid_bdev1", 00:17:02.052 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:17:02.052 "strip_size_kb": 64, 00:17:02.052 "state": "online", 00:17:02.052 "raid_level": "raid5f", 00:17:02.052 "superblock": true, 00:17:02.052 "num_base_bdevs": 4, 00:17:02.052 "num_base_bdevs_discovered": 3, 
00:17:02.052 "num_base_bdevs_operational": 3, 00:17:02.052 "base_bdevs_list": [ 00:17:02.052 { 00:17:02.052 "name": null, 00:17:02.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.052 "is_configured": false, 00:17:02.052 "data_offset": 0, 00:17:02.052 "data_size": 63488 00:17:02.052 }, 00:17:02.052 { 00:17:02.052 "name": "BaseBdev2", 00:17:02.052 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:17:02.052 "is_configured": true, 00:17:02.052 "data_offset": 2048, 00:17:02.052 "data_size": 63488 00:17:02.052 }, 00:17:02.052 { 00:17:02.052 "name": "BaseBdev3", 00:17:02.052 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:17:02.052 "is_configured": true, 00:17:02.052 "data_offset": 2048, 00:17:02.052 "data_size": 63488 00:17:02.052 }, 00:17:02.052 { 00:17:02.052 "name": "BaseBdev4", 00:17:02.052 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:17:02.052 "is_configured": true, 00:17:02.052 "data_offset": 2048, 00:17:02.052 "data_size": 63488 00:17:02.052 } 00:17:02.052 ] 00:17:02.052 }' 00:17:02.052 08:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.052 08:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.682 08:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:02.682 08:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.682 08:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.682 [2024-09-28 08:54:40.443766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:02.682 [2024-09-28 08:54:40.443856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.682 [2024-09-28 08:54:40.443887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:02.682 [2024-09-28 08:54:40.443899] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.682 [2024-09-28 08:54:40.444433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.682 [2024-09-28 08:54:40.444454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:02.682 [2024-09-28 08:54:40.444545] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:02.682 [2024-09-28 08:54:40.444561] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:02.682 [2024-09-28 08:54:40.444571] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:02.682 [2024-09-28 08:54:40.444593] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:02.682 [2024-09-28 08:54:40.457626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:02.682 spare 00:17:02.682 08:54:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.682 08:54:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:02.682 [2024-09-28 08:54:40.466049] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.620 "name": "raid_bdev1", 00:17:03.620 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:17:03.620 "strip_size_kb": 64, 00:17:03.620 "state": "online", 00:17:03.620 "raid_level": "raid5f", 00:17:03.620 "superblock": true, 00:17:03.620 "num_base_bdevs": 4, 00:17:03.620 "num_base_bdevs_discovered": 4, 00:17:03.620 "num_base_bdevs_operational": 4, 00:17:03.620 "process": { 00:17:03.620 "type": "rebuild", 00:17:03.620 "target": "spare", 00:17:03.620 "progress": { 00:17:03.620 "blocks": 19200, 00:17:03.620 "percent": 10 00:17:03.620 } 00:17:03.620 }, 00:17:03.620 "base_bdevs_list": [ 00:17:03.620 { 00:17:03.620 "name": "spare", 00:17:03.620 "uuid": "5af10d5f-6a87-5ad2-9791-b831edffe596", 00:17:03.620 "is_configured": true, 00:17:03.620 "data_offset": 2048, 00:17:03.620 "data_size": 63488 00:17:03.620 }, 00:17:03.620 { 00:17:03.620 "name": "BaseBdev2", 00:17:03.620 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:17:03.620 "is_configured": true, 00:17:03.620 "data_offset": 2048, 00:17:03.620 "data_size": 63488 00:17:03.620 }, 00:17:03.620 { 00:17:03.620 "name": "BaseBdev3", 00:17:03.620 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:17:03.620 "is_configured": true, 00:17:03.620 "data_offset": 2048, 00:17:03.620 "data_size": 63488 00:17:03.620 }, 00:17:03.620 { 00:17:03.620 "name": "BaseBdev4", 00:17:03.620 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 
00:17:03.620 "is_configured": true, 00:17:03.620 "data_offset": 2048, 00:17:03.620 "data_size": 63488 00:17:03.620 } 00:17:03.620 ] 00:17:03.620 }' 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.620 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.620 [2024-09-28 08:54:41.609207] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:03.880 [2024-09-28 08:54:41.673194] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:03.880 [2024-09-28 08:54:41.673293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.880 [2024-09-28 08:54:41.673331] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:03.880 [2024-09-28 08:54:41.673351] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.880 "name": "raid_bdev1", 00:17:03.880 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:17:03.880 "strip_size_kb": 64, 00:17:03.880 "state": "online", 00:17:03.880 "raid_level": "raid5f", 00:17:03.880 "superblock": true, 00:17:03.880 "num_base_bdevs": 4, 00:17:03.880 "num_base_bdevs_discovered": 3, 00:17:03.880 "num_base_bdevs_operational": 3, 00:17:03.880 "base_bdevs_list": [ 00:17:03.880 { 00:17:03.880 "name": null, 00:17:03.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.880 "is_configured": 
false, 00:17:03.880 "data_offset": 0, 00:17:03.880 "data_size": 63488 00:17:03.880 }, 00:17:03.880 { 00:17:03.880 "name": "BaseBdev2", 00:17:03.880 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:17:03.880 "is_configured": true, 00:17:03.880 "data_offset": 2048, 00:17:03.880 "data_size": 63488 00:17:03.880 }, 00:17:03.880 { 00:17:03.880 "name": "BaseBdev3", 00:17:03.880 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:17:03.880 "is_configured": true, 00:17:03.880 "data_offset": 2048, 00:17:03.880 "data_size": 63488 00:17:03.880 }, 00:17:03.880 { 00:17:03.880 "name": "BaseBdev4", 00:17:03.880 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:17:03.880 "is_configured": true, 00:17:03.880 "data_offset": 2048, 00:17:03.880 "data_size": 63488 00:17:03.880 } 00:17:03.880 ] 00:17:03.880 }' 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.880 08:54:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.139 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:04.139 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.139 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:04.139 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:04.139 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.139 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.139 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.139 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.139 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:17:04.139 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.398 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.398 "name": "raid_bdev1", 00:17:04.398 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:17:04.398 "strip_size_kb": 64, 00:17:04.398 "state": "online", 00:17:04.398 "raid_level": "raid5f", 00:17:04.398 "superblock": true, 00:17:04.398 "num_base_bdevs": 4, 00:17:04.398 "num_base_bdevs_discovered": 3, 00:17:04.398 "num_base_bdevs_operational": 3, 00:17:04.398 "base_bdevs_list": [ 00:17:04.398 { 00:17:04.398 "name": null, 00:17:04.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.398 "is_configured": false, 00:17:04.398 "data_offset": 0, 00:17:04.398 "data_size": 63488 00:17:04.398 }, 00:17:04.398 { 00:17:04.398 "name": "BaseBdev2", 00:17:04.398 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:17:04.398 "is_configured": true, 00:17:04.398 "data_offset": 2048, 00:17:04.398 "data_size": 63488 00:17:04.398 }, 00:17:04.398 { 00:17:04.398 "name": "BaseBdev3", 00:17:04.398 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:17:04.398 "is_configured": true, 00:17:04.398 "data_offset": 2048, 00:17:04.398 "data_size": 63488 00:17:04.398 }, 00:17:04.398 { 00:17:04.398 "name": "BaseBdev4", 00:17:04.398 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:17:04.398 "is_configured": true, 00:17:04.398 "data_offset": 2048, 00:17:04.398 "data_size": 63488 00:17:04.398 } 00:17:04.398 ] 00:17:04.398 }' 00:17:04.398 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.398 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:04.398 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.398 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e 
]] 00:17:04.398 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:04.398 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.398 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.398 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.398 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:04.398 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.398 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.398 [2024-09-28 08:54:42.239941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:04.398 [2024-09-28 08:54:42.240000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.398 [2024-09-28 08:54:42.240025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:04.398 [2024-09-28 08:54:42.240035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.398 [2024-09-28 08:54:42.240545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.398 [2024-09-28 08:54:42.240563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:04.398 [2024-09-28 08:54:42.240669] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:04.398 [2024-09-28 08:54:42.240685] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:04.398 [2024-09-28 08:54:42.240699] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:17:04.398 [2024-09-28 08:54:42.240711] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:04.398 BaseBdev1 00:17:04.398 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.398 08:54:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.335 "name": "raid_bdev1", 00:17:05.335 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:17:05.335 "strip_size_kb": 64, 00:17:05.335 "state": "online", 00:17:05.335 "raid_level": "raid5f", 00:17:05.335 "superblock": true, 00:17:05.335 "num_base_bdevs": 4, 00:17:05.335 "num_base_bdevs_discovered": 3, 00:17:05.335 "num_base_bdevs_operational": 3, 00:17:05.335 "base_bdevs_list": [ 00:17:05.335 { 00:17:05.335 "name": null, 00:17:05.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.335 "is_configured": false, 00:17:05.335 "data_offset": 0, 00:17:05.335 "data_size": 63488 00:17:05.335 }, 00:17:05.335 { 00:17:05.335 "name": "BaseBdev2", 00:17:05.335 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:17:05.335 "is_configured": true, 00:17:05.335 "data_offset": 2048, 00:17:05.335 "data_size": 63488 00:17:05.335 }, 00:17:05.335 { 00:17:05.335 "name": "BaseBdev3", 00:17:05.335 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:17:05.335 "is_configured": true, 00:17:05.335 "data_offset": 2048, 00:17:05.335 "data_size": 63488 00:17:05.335 }, 00:17:05.335 { 00:17:05.335 "name": "BaseBdev4", 00:17:05.335 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:17:05.335 "is_configured": true, 00:17:05.335 "data_offset": 2048, 00:17:05.335 "data_size": 63488 00:17:05.335 } 00:17:05.335 ] 00:17:05.335 }' 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.335 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.903 "name": "raid_bdev1", 00:17:05.903 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:17:05.903 "strip_size_kb": 64, 00:17:05.903 "state": "online", 00:17:05.903 "raid_level": "raid5f", 00:17:05.903 "superblock": true, 00:17:05.903 "num_base_bdevs": 4, 00:17:05.903 "num_base_bdevs_discovered": 3, 00:17:05.903 "num_base_bdevs_operational": 3, 00:17:05.903 "base_bdevs_list": [ 00:17:05.903 { 00:17:05.903 "name": null, 00:17:05.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.903 "is_configured": false, 00:17:05.903 "data_offset": 0, 00:17:05.903 "data_size": 63488 00:17:05.903 }, 00:17:05.903 { 00:17:05.903 "name": "BaseBdev2", 00:17:05.903 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:17:05.903 "is_configured": true, 00:17:05.903 "data_offset": 2048, 00:17:05.903 "data_size": 63488 00:17:05.903 }, 00:17:05.903 { 00:17:05.903 "name": "BaseBdev3", 00:17:05.903 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:17:05.903 "is_configured": true, 00:17:05.903 "data_offset": 2048, 00:17:05.903 "data_size": 63488 00:17:05.903 }, 
00:17:05.903 { 00:17:05.903 "name": "BaseBdev4", 00:17:05.903 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:17:05.903 "is_configured": true, 00:17:05.903 "data_offset": 2048, 00:17:05.903 "data_size": 63488 00:17:05.903 } 00:17:05.903 ] 00:17:05.903 }' 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.903 [2024-09-28 08:54:43.869470] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.903 [2024-09-28 08:54:43.869727] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:05.903 [2024-09-28 08:54:43.869748] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:05.903 request: 00:17:05.903 { 00:17:05.903 "base_bdev": "BaseBdev1", 00:17:05.903 "raid_bdev": "raid_bdev1", 00:17:05.903 "method": "bdev_raid_add_base_bdev", 00:17:05.903 "req_id": 1 00:17:05.903 } 00:17:05.903 Got JSON-RPC error response 00:17:05.903 response: 00:17:05.903 { 00:17:05.903 "code": -22, 00:17:05.903 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:05.903 } 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:05.903 08:54:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.296 "name": "raid_bdev1", 00:17:07.296 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:17:07.296 "strip_size_kb": 64, 00:17:07.296 "state": "online", 00:17:07.296 "raid_level": "raid5f", 00:17:07.296 "superblock": true, 00:17:07.296 "num_base_bdevs": 4, 00:17:07.296 "num_base_bdevs_discovered": 3, 00:17:07.296 "num_base_bdevs_operational": 3, 00:17:07.296 "base_bdevs_list": [ 00:17:07.296 { 00:17:07.296 "name": null, 00:17:07.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.296 "is_configured": false, 00:17:07.296 "data_offset": 0, 00:17:07.296 "data_size": 63488 00:17:07.296 }, 00:17:07.296 { 00:17:07.296 "name": "BaseBdev2", 00:17:07.296 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:17:07.296 "is_configured": true, 00:17:07.296 
"data_offset": 2048, 00:17:07.296 "data_size": 63488 00:17:07.296 }, 00:17:07.296 { 00:17:07.296 "name": "BaseBdev3", 00:17:07.296 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:17:07.296 "is_configured": true, 00:17:07.296 "data_offset": 2048, 00:17:07.296 "data_size": 63488 00:17:07.296 }, 00:17:07.296 { 00:17:07.296 "name": "BaseBdev4", 00:17:07.296 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:17:07.296 "is_configured": true, 00:17:07.296 "data_offset": 2048, 00:17:07.296 "data_size": 63488 00:17:07.296 } 00:17:07.296 ] 00:17:07.296 }' 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.296 08:54:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.557 
"name": "raid_bdev1", 00:17:07.557 "uuid": "5fb48463-70a5-40b7-9165-80d74e14cdcc", 00:17:07.557 "strip_size_kb": 64, 00:17:07.557 "state": "online", 00:17:07.557 "raid_level": "raid5f", 00:17:07.557 "superblock": true, 00:17:07.557 "num_base_bdevs": 4, 00:17:07.557 "num_base_bdevs_discovered": 3, 00:17:07.557 "num_base_bdevs_operational": 3, 00:17:07.557 "base_bdevs_list": [ 00:17:07.557 { 00:17:07.557 "name": null, 00:17:07.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.557 "is_configured": false, 00:17:07.557 "data_offset": 0, 00:17:07.557 "data_size": 63488 00:17:07.557 }, 00:17:07.557 { 00:17:07.557 "name": "BaseBdev2", 00:17:07.557 "uuid": "6b344f62-3a39-5c45-9797-5d1653cb9929", 00:17:07.557 "is_configured": true, 00:17:07.557 "data_offset": 2048, 00:17:07.557 "data_size": 63488 00:17:07.557 }, 00:17:07.557 { 00:17:07.557 "name": "BaseBdev3", 00:17:07.557 "uuid": "ee33ea3b-2364-5111-b7d2-8c8110078a48", 00:17:07.557 "is_configured": true, 00:17:07.557 "data_offset": 2048, 00:17:07.557 "data_size": 63488 00:17:07.557 }, 00:17:07.557 { 00:17:07.557 "name": "BaseBdev4", 00:17:07.557 "uuid": "17c646a8-532d-5a8c-af1b-98b07893ad66", 00:17:07.557 "is_configured": true, 00:17:07.557 "data_offset": 2048, 00:17:07.557 "data_size": 63488 00:17:07.557 } 00:17:07.557 ] 00:17:07.557 }' 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85133 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 85133 ']' 00:17:07.557 08:54:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 85133 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85133 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85133' 00:17:07.557 killing process with pid 85133 00:17:07.557 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 85133 00:17:07.557 Received shutdown signal, test time was about 60.000000 seconds 00:17:07.557 00:17:07.557 Latency(us) 00:17:07.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.557 =================================================================================================================== 00:17:07.558 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:07.558 08:54:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 85133 00:17:07.558 [2024-09-28 08:54:45.482381] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:07.558 [2024-09-28 08:54:45.482534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.558 [2024-09-28 08:54:45.482618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:07.558 [2024-09-28 08:54:45.482632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:08.126 [2024-09-28 
08:54:45.986679] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:09.508 08:54:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:09.508 00:17:09.508 real 0m27.603s 00:17:09.508 user 0m34.227s 00:17:09.508 sys 0m3.606s 00:17:09.508 08:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:09.508 08:54:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.508 ************************************ 00:17:09.508 END TEST raid5f_rebuild_test_sb 00:17:09.509 ************************************ 00:17:09.509 08:54:47 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:09.509 08:54:47 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:09.509 08:54:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:09.509 08:54:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:09.509 08:54:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.509 ************************************ 00:17:09.509 START TEST raid_state_function_test_sb_4k 00:17:09.509 ************************************ 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:09.509 08:54:47 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@229 -- # raid_pid=85952 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85952' 00:17:09.509 Process raid pid: 85952 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85952 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 85952 ']' 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:09.509 08:54:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.509 [2024-09-28 08:54:47.496677] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:09.509 [2024-09-28 08:54:47.496883] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.769 [2024-09-28 08:54:47.668115] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.029 [2024-09-28 08:54:47.922423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.288 [2024-09-28 08:54:48.154811] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:10.288 [2024-09-28 08:54:48.154924] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.548 [2024-09-28 08:54:48.344840] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:10.548 [2024-09-28 08:54:48.344902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:10.548 [2024-09-28 08:54:48.344912] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:10.548 [2024-09-28 08:54:48.344922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.548 "name": "Existed_Raid", 00:17:10.548 "uuid": 
"5ff1930a-5470-40c0-bb96-55b4b7918a9f", 00:17:10.548 "strip_size_kb": 0, 00:17:10.548 "state": "configuring", 00:17:10.548 "raid_level": "raid1", 00:17:10.548 "superblock": true, 00:17:10.548 "num_base_bdevs": 2, 00:17:10.548 "num_base_bdevs_discovered": 0, 00:17:10.548 "num_base_bdevs_operational": 2, 00:17:10.548 "base_bdevs_list": [ 00:17:10.548 { 00:17:10.548 "name": "BaseBdev1", 00:17:10.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.548 "is_configured": false, 00:17:10.548 "data_offset": 0, 00:17:10.548 "data_size": 0 00:17:10.548 }, 00:17:10.548 { 00:17:10.548 "name": "BaseBdev2", 00:17:10.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.548 "is_configured": false, 00:17:10.548 "data_offset": 0, 00:17:10.548 "data_size": 0 00:17:10.548 } 00:17:10.548 ] 00:17:10.548 }' 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.548 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.809 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:10.809 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.809 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.809 [2024-09-28 08:54:48.763989] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:10.809 [2024-09-28 08:54:48.764072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:10.809 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.809 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:10.809 08:54:48 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.809 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.809 [2024-09-28 08:54:48.775994] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:10.809 [2024-09-28 08:54:48.776073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:10.809 [2024-09-28 08:54:48.776101] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:10.809 [2024-09-28 08:54:48.776127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:10.809 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.809 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:10.809 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.809 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.069 [2024-09-28 08:54:48.839319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.069 BaseBdev1 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.069 [ 00:17:11.069 { 00:17:11.069 "name": "BaseBdev1", 00:17:11.069 "aliases": [ 00:17:11.069 "136fe2f2-31a9-4898-9bfd-e4f4e662d3e1" 00:17:11.069 ], 00:17:11.069 "product_name": "Malloc disk", 00:17:11.069 "block_size": 4096, 00:17:11.069 "num_blocks": 8192, 00:17:11.069 "uuid": "136fe2f2-31a9-4898-9bfd-e4f4e662d3e1", 00:17:11.069 "assigned_rate_limits": { 00:17:11.069 "rw_ios_per_sec": 0, 00:17:11.069 "rw_mbytes_per_sec": 0, 00:17:11.069 "r_mbytes_per_sec": 0, 00:17:11.069 "w_mbytes_per_sec": 0 00:17:11.069 }, 00:17:11.069 "claimed": true, 00:17:11.069 "claim_type": "exclusive_write", 00:17:11.069 "zoned": false, 00:17:11.069 "supported_io_types": { 00:17:11.069 "read": true, 00:17:11.069 "write": true, 00:17:11.069 "unmap": true, 00:17:11.069 "flush": true, 00:17:11.069 "reset": true, 00:17:11.069 "nvme_admin": false, 00:17:11.069 "nvme_io": false, 00:17:11.069 "nvme_io_md": false, 00:17:11.069 "write_zeroes": true, 00:17:11.069 "zcopy": true, 00:17:11.069 
"get_zone_info": false, 00:17:11.069 "zone_management": false, 00:17:11.069 "zone_append": false, 00:17:11.069 "compare": false, 00:17:11.069 "compare_and_write": false, 00:17:11.069 "abort": true, 00:17:11.069 "seek_hole": false, 00:17:11.069 "seek_data": false, 00:17:11.069 "copy": true, 00:17:11.069 "nvme_iov_md": false 00:17:11.069 }, 00:17:11.069 "memory_domains": [ 00:17:11.069 { 00:17:11.069 "dma_device_id": "system", 00:17:11.069 "dma_device_type": 1 00:17:11.069 }, 00:17:11.069 { 00:17:11.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.069 "dma_device_type": 2 00:17:11.069 } 00:17:11.069 ], 00:17:11.069 "driver_specific": {} 00:17:11.069 } 00:17:11.069 ] 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.069 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.069 "name": "Existed_Raid", 00:17:11.069 "uuid": "4c2a0cf9-3239-48ad-81f9-0c125553967b", 00:17:11.069 "strip_size_kb": 0, 00:17:11.069 "state": "configuring", 00:17:11.069 "raid_level": "raid1", 00:17:11.069 "superblock": true, 00:17:11.069 "num_base_bdevs": 2, 00:17:11.069 "num_base_bdevs_discovered": 1, 00:17:11.069 "num_base_bdevs_operational": 2, 00:17:11.069 "base_bdevs_list": [ 00:17:11.069 { 00:17:11.070 "name": "BaseBdev1", 00:17:11.070 "uuid": "136fe2f2-31a9-4898-9bfd-e4f4e662d3e1", 00:17:11.070 "is_configured": true, 00:17:11.070 "data_offset": 256, 00:17:11.070 "data_size": 7936 00:17:11.070 }, 00:17:11.070 { 00:17:11.070 "name": "BaseBdev2", 00:17:11.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.070 "is_configured": false, 00:17:11.070 "data_offset": 0, 00:17:11.070 "data_size": 0 00:17:11.070 } 00:17:11.070 ] 00:17:11.070 }' 00:17:11.070 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.070 08:54:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.329 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:11.329 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.329 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.329 [2024-09-28 08:54:49.318517] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:11.329 [2024-09-28 08:54:49.318560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.588 [2024-09-28 08:54:49.330542] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.588 [2024-09-28 08:54:49.332613] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:11.588 [2024-09-28 08:54:49.332669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:11.588 08:54:49 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.588 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.588 "name": "Existed_Raid", 00:17:11.588 "uuid": "6a15253a-47c0-481f-9416-8c62d3c0e06b", 00:17:11.588 "strip_size_kb": 0, 00:17:11.588 "state": "configuring", 00:17:11.588 "raid_level": "raid1", 00:17:11.588 "superblock": true, 
00:17:11.588 "num_base_bdevs": 2, 00:17:11.588 "num_base_bdevs_discovered": 1, 00:17:11.588 "num_base_bdevs_operational": 2, 00:17:11.588 "base_bdevs_list": [ 00:17:11.588 { 00:17:11.588 "name": "BaseBdev1", 00:17:11.588 "uuid": "136fe2f2-31a9-4898-9bfd-e4f4e662d3e1", 00:17:11.588 "is_configured": true, 00:17:11.588 "data_offset": 256, 00:17:11.588 "data_size": 7936 00:17:11.588 }, 00:17:11.588 { 00:17:11.588 "name": "BaseBdev2", 00:17:11.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.589 "is_configured": false, 00:17:11.589 "data_offset": 0, 00:17:11.589 "data_size": 0 00:17:11.589 } 00:17:11.589 ] 00:17:11.589 }' 00:17:11.589 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.589 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.848 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:11.848 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.848 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.107 [2024-09-28 08:54:49.848032] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:12.107 [2024-09-28 08:54:49.848300] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:12.107 [2024-09-28 08:54:49.848319] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:12.107 BaseBdev2 00:17:12.107 [2024-09-28 08:54:49.848608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:12.107 [2024-09-28 08:54:49.848807] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:12.107 [2024-09-28 08:54:49.848821] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:17:12.107 [2024-09-28 08:54:49.848991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.107 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.107 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:12.107 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:12.107 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:12.107 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:17:12.107 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:12.107 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:12.107 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:12.107 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.107 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.107 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.107 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:12.107 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.107 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.107 [ 00:17:12.107 { 00:17:12.107 "name": "BaseBdev2", 00:17:12.107 "aliases": [ 00:17:12.107 "558d5230-6dc7-4ced-b533-7cbb0c11e232" 00:17:12.107 ], 00:17:12.107 "product_name": "Malloc 
disk", 00:17:12.107 "block_size": 4096, 00:17:12.107 "num_blocks": 8192, 00:17:12.107 "uuid": "558d5230-6dc7-4ced-b533-7cbb0c11e232", 00:17:12.107 "assigned_rate_limits": { 00:17:12.107 "rw_ios_per_sec": 0, 00:17:12.107 "rw_mbytes_per_sec": 0, 00:17:12.107 "r_mbytes_per_sec": 0, 00:17:12.107 "w_mbytes_per_sec": 0 00:17:12.107 }, 00:17:12.107 "claimed": true, 00:17:12.107 "claim_type": "exclusive_write", 00:17:12.107 "zoned": false, 00:17:12.107 "supported_io_types": { 00:17:12.107 "read": true, 00:17:12.107 "write": true, 00:17:12.107 "unmap": true, 00:17:12.107 "flush": true, 00:17:12.107 "reset": true, 00:17:12.107 "nvme_admin": false, 00:17:12.108 "nvme_io": false, 00:17:12.108 "nvme_io_md": false, 00:17:12.108 "write_zeroes": true, 00:17:12.108 "zcopy": true, 00:17:12.108 "get_zone_info": false, 00:17:12.108 "zone_management": false, 00:17:12.108 "zone_append": false, 00:17:12.108 "compare": false, 00:17:12.108 "compare_and_write": false, 00:17:12.108 "abort": true, 00:17:12.108 "seek_hole": false, 00:17:12.108 "seek_data": false, 00:17:12.108 "copy": true, 00:17:12.108 "nvme_iov_md": false 00:17:12.108 }, 00:17:12.108 "memory_domains": [ 00:17:12.108 { 00:17:12.108 "dma_device_id": "system", 00:17:12.108 "dma_device_type": 1 00:17:12.108 }, 00:17:12.108 { 00:17:12.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.108 "dma_device_type": 2 00:17:12.108 } 00:17:12.108 ], 00:17:12.108 "driver_specific": {} 00:17:12.108 } 00:17:12.108 ] 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.108 "name": "Existed_Raid", 00:17:12.108 "uuid": "6a15253a-47c0-481f-9416-8c62d3c0e06b", 00:17:12.108 "strip_size_kb": 0, 00:17:12.108 "state": "online", 
00:17:12.108 "raid_level": "raid1", 00:17:12.108 "superblock": true, 00:17:12.108 "num_base_bdevs": 2, 00:17:12.108 "num_base_bdevs_discovered": 2, 00:17:12.108 "num_base_bdevs_operational": 2, 00:17:12.108 "base_bdevs_list": [ 00:17:12.108 { 00:17:12.108 "name": "BaseBdev1", 00:17:12.108 "uuid": "136fe2f2-31a9-4898-9bfd-e4f4e662d3e1", 00:17:12.108 "is_configured": true, 00:17:12.108 "data_offset": 256, 00:17:12.108 "data_size": 7936 00:17:12.108 }, 00:17:12.108 { 00:17:12.108 "name": "BaseBdev2", 00:17:12.108 "uuid": "558d5230-6dc7-4ced-b533-7cbb0c11e232", 00:17:12.108 "is_configured": true, 00:17:12.108 "data_offset": 256, 00:17:12.108 "data_size": 7936 00:17:12.108 } 00:17:12.108 ] 00:17:12.108 }' 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.108 08:54:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.367 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:12.367 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:12.367 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:12.367 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:12.367 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:12.367 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:12.367 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:12.367 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:12.367 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:12.367 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.367 [2024-09-28 08:54:50.315643] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.367 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.367 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:12.367 "name": "Existed_Raid", 00:17:12.367 "aliases": [ 00:17:12.367 "6a15253a-47c0-481f-9416-8c62d3c0e06b" 00:17:12.367 ], 00:17:12.367 "product_name": "Raid Volume", 00:17:12.367 "block_size": 4096, 00:17:12.367 "num_blocks": 7936, 00:17:12.367 "uuid": "6a15253a-47c0-481f-9416-8c62d3c0e06b", 00:17:12.367 "assigned_rate_limits": { 00:17:12.367 "rw_ios_per_sec": 0, 00:17:12.367 "rw_mbytes_per_sec": 0, 00:17:12.367 "r_mbytes_per_sec": 0, 00:17:12.367 "w_mbytes_per_sec": 0 00:17:12.367 }, 00:17:12.367 "claimed": false, 00:17:12.367 "zoned": false, 00:17:12.367 "supported_io_types": { 00:17:12.367 "read": true, 00:17:12.367 "write": true, 00:17:12.367 "unmap": false, 00:17:12.367 "flush": false, 00:17:12.367 "reset": true, 00:17:12.367 "nvme_admin": false, 00:17:12.367 "nvme_io": false, 00:17:12.367 "nvme_io_md": false, 00:17:12.367 "write_zeroes": true, 00:17:12.367 "zcopy": false, 00:17:12.367 "get_zone_info": false, 00:17:12.367 "zone_management": false, 00:17:12.367 "zone_append": false, 00:17:12.367 "compare": false, 00:17:12.367 "compare_and_write": false, 00:17:12.367 "abort": false, 00:17:12.367 "seek_hole": false, 00:17:12.367 "seek_data": false, 00:17:12.367 "copy": false, 00:17:12.367 "nvme_iov_md": false 00:17:12.367 }, 00:17:12.367 "memory_domains": [ 00:17:12.367 { 00:17:12.367 "dma_device_id": "system", 00:17:12.367 "dma_device_type": 1 00:17:12.367 }, 00:17:12.367 { 00:17:12.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.367 "dma_device_type": 2 00:17:12.367 }, 00:17:12.367 { 00:17:12.367 
"dma_device_id": "system", 00:17:12.367 "dma_device_type": 1 00:17:12.367 }, 00:17:12.367 { 00:17:12.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.367 "dma_device_type": 2 00:17:12.367 } 00:17:12.367 ], 00:17:12.367 "driver_specific": { 00:17:12.367 "raid": { 00:17:12.367 "uuid": "6a15253a-47c0-481f-9416-8c62d3c0e06b", 00:17:12.367 "strip_size_kb": 0, 00:17:12.367 "state": "online", 00:17:12.367 "raid_level": "raid1", 00:17:12.367 "superblock": true, 00:17:12.367 "num_base_bdevs": 2, 00:17:12.367 "num_base_bdevs_discovered": 2, 00:17:12.367 "num_base_bdevs_operational": 2, 00:17:12.367 "base_bdevs_list": [ 00:17:12.367 { 00:17:12.367 "name": "BaseBdev1", 00:17:12.367 "uuid": "136fe2f2-31a9-4898-9bfd-e4f4e662d3e1", 00:17:12.368 "is_configured": true, 00:17:12.368 "data_offset": 256, 00:17:12.368 "data_size": 7936 00:17:12.368 }, 00:17:12.368 { 00:17:12.368 "name": "BaseBdev2", 00:17:12.368 "uuid": "558d5230-6dc7-4ced-b533-7cbb0c11e232", 00:17:12.368 "is_configured": true, 00:17:12.368 "data_offset": 256, 00:17:12.368 "data_size": 7936 00:17:12.368 } 00:17:12.368 ] 00:17:12.368 } 00:17:12.368 } 00:17:12.368 }' 00:17:12.368 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:12.627 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:12.627 BaseBdev2' 00:17:12.627 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.627 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:12.627 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:12.627 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:12.627 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:12.627 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.627 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.627 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.627 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:12.627 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:12.627 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:12.627 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:12.627 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.627 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.627 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.628 
08:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.628 [2024-09-28 08:54:50.503123] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.628 08:54:50 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.628 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.887 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.887 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.887 "name": "Existed_Raid", 00:17:12.887 "uuid": "6a15253a-47c0-481f-9416-8c62d3c0e06b", 00:17:12.887 "strip_size_kb": 0, 00:17:12.887 "state": "online", 00:17:12.887 "raid_level": "raid1", 00:17:12.887 "superblock": true, 00:17:12.887 "num_base_bdevs": 2, 00:17:12.887 "num_base_bdevs_discovered": 1, 00:17:12.887 "num_base_bdevs_operational": 1, 00:17:12.887 "base_bdevs_list": [ 00:17:12.887 { 00:17:12.887 "name": null, 00:17:12.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.887 "is_configured": false, 00:17:12.887 "data_offset": 0, 00:17:12.887 "data_size": 7936 00:17:12.887 }, 00:17:12.887 { 00:17:12.887 "name": "BaseBdev2", 00:17:12.887 "uuid": "558d5230-6dc7-4ced-b533-7cbb0c11e232", 00:17:12.887 "is_configured": true, 00:17:12.887 "data_offset": 256, 00:17:12.887 "data_size": 7936 00:17:12.887 } 00:17:12.887 ] 00:17:12.887 }' 00:17:12.887 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.887 08:54:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.147 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:13.147 08:54:51 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:13.147 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.147 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.147 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.147 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:13.147 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.147 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:13.147 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:13.147 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:13.147 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.147 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.147 [2024-09-28 08:54:51.140560] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:13.147 [2024-09-28 08:54:51.140696] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.407 [2024-09-28 08:54:51.240803] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.407 [2024-09-28 08:54:51.240858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.407 [2024-09-28 08:54:51.240870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:13.407 08:54:51 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85952 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 85952 ']' 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 85952 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85952 00:17:13.407 killing process with pid 85952 00:17:13.407 08:54:51 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85952' 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 85952 00:17:13.407 [2024-09-28 08:54:51.320072] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:13.407 08:54:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 85952 00:17:13.407 [2024-09-28 08:54:51.336830] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:14.789 08:54:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:14.789 00:17:14.789 real 0m5.259s 00:17:14.789 user 0m7.323s 00:17:14.789 sys 0m0.999s 00:17:14.789 08:54:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:14.789 08:54:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.789 ************************************ 00:17:14.789 END TEST raid_state_function_test_sb_4k 00:17:14.789 ************************************ 00:17:14.789 08:54:52 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:14.789 08:54:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:14.789 08:54:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:14.789 08:54:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:14.789 ************************************ 00:17:14.789 START TEST raid_superblock_test_4k 00:17:14.789 ************************************ 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # 
raid_superblock_test raid1 2 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86206 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 86206 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 86206 ']' 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:14.789 08:54:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.049 [2024-09-28 08:54:52.828922] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:15.049 [2024-09-28 08:54:52.829056] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86206 ] 00:17:15.049 [2024-09-28 08:54:52.994305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.309 [2024-09-28 08:54:53.229158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.569 [2024-09-28 08:54:53.465942] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:15.569 [2024-09-28 08:54:53.465973] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:17:15.829 08:54:53 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.829 malloc1 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.829 [2024-09-28 08:54:53.703862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:15.829 [2024-09-28 08:54:53.703937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.829 
[2024-09-28 08:54:53.703979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:15.829 [2024-09-28 08:54:53.703992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.829 [2024-09-28 08:54:53.706320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.829 [2024-09-28 08:54:53.706354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:15.829 pt1 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.829 malloc2 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.829 [2024-09-28 08:54:53.789277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:15.829 [2024-09-28 08:54:53.789335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.829 [2024-09-28 08:54:53.789376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:15.829 [2024-09-28 08:54:53.789385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.829 [2024-09-28 08:54:53.791738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.829 [2024-09-28 08:54:53.791771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:15.829 pt2 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.829 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.829 [2024-09-28 08:54:53.801322] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:15.829 [2024-09-28 08:54:53.803297] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:15.829 [2024-09-28 08:54:53.803498] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:15.830 [2024-09-28 08:54:53.803510] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:15.830 [2024-09-28 08:54:53.803750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:15.830 [2024-09-28 08:54:53.803920] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:15.830 [2024-09-28 08:54:53.803938] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:15.830 [2024-09-28 08:54:53.804087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.830 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.830 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:15.830 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.830 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.830 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.830 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.830 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.830 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.830 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.830 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:15.830 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.830 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.830 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.830 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.830 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.089 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.089 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.089 "name": "raid_bdev1", 00:17:16.089 "uuid": "da103670-c2d9-4406-89db-f514c16bbce7", 00:17:16.089 "strip_size_kb": 0, 00:17:16.089 "state": "online", 00:17:16.089 "raid_level": "raid1", 00:17:16.089 "superblock": true, 00:17:16.089 "num_base_bdevs": 2, 00:17:16.089 "num_base_bdevs_discovered": 2, 00:17:16.089 "num_base_bdevs_operational": 2, 00:17:16.089 "base_bdevs_list": [ 00:17:16.089 { 00:17:16.089 "name": "pt1", 00:17:16.089 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:16.089 "is_configured": true, 00:17:16.089 "data_offset": 256, 00:17:16.089 "data_size": 7936 00:17:16.089 }, 00:17:16.089 { 00:17:16.089 "name": "pt2", 00:17:16.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.089 "is_configured": true, 00:17:16.089 "data_offset": 256, 00:17:16.089 "data_size": 7936 00:17:16.089 } 00:17:16.089 ] 00:17:16.089 }' 00:17:16.089 08:54:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.089 08:54:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.349 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:16.349 08:54:54 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:16.349 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:16.350 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:16.350 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:16.350 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:16.350 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:16.350 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.350 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.350 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:16.350 [2024-09-28 08:54:54.228904] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.350 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.350 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:16.350 "name": "raid_bdev1", 00:17:16.350 "aliases": [ 00:17:16.350 "da103670-c2d9-4406-89db-f514c16bbce7" 00:17:16.350 ], 00:17:16.350 "product_name": "Raid Volume", 00:17:16.350 "block_size": 4096, 00:17:16.350 "num_blocks": 7936, 00:17:16.350 "uuid": "da103670-c2d9-4406-89db-f514c16bbce7", 00:17:16.350 "assigned_rate_limits": { 00:17:16.350 "rw_ios_per_sec": 0, 00:17:16.350 "rw_mbytes_per_sec": 0, 00:17:16.350 "r_mbytes_per_sec": 0, 00:17:16.350 "w_mbytes_per_sec": 0 00:17:16.350 }, 00:17:16.350 "claimed": false, 00:17:16.350 "zoned": false, 00:17:16.350 "supported_io_types": { 00:17:16.350 "read": true, 00:17:16.350 "write": true, 00:17:16.350 "unmap": false, 00:17:16.350 "flush": false, 
00:17:16.350 "reset": true, 00:17:16.350 "nvme_admin": false, 00:17:16.350 "nvme_io": false, 00:17:16.350 "nvme_io_md": false, 00:17:16.350 "write_zeroes": true, 00:17:16.350 "zcopy": false, 00:17:16.350 "get_zone_info": false, 00:17:16.350 "zone_management": false, 00:17:16.350 "zone_append": false, 00:17:16.350 "compare": false, 00:17:16.350 "compare_and_write": false, 00:17:16.350 "abort": false, 00:17:16.350 "seek_hole": false, 00:17:16.350 "seek_data": false, 00:17:16.350 "copy": false, 00:17:16.350 "nvme_iov_md": false 00:17:16.350 }, 00:17:16.350 "memory_domains": [ 00:17:16.350 { 00:17:16.350 "dma_device_id": "system", 00:17:16.350 "dma_device_type": 1 00:17:16.350 }, 00:17:16.350 { 00:17:16.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.350 "dma_device_type": 2 00:17:16.350 }, 00:17:16.350 { 00:17:16.350 "dma_device_id": "system", 00:17:16.350 "dma_device_type": 1 00:17:16.350 }, 00:17:16.350 { 00:17:16.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.350 "dma_device_type": 2 00:17:16.350 } 00:17:16.350 ], 00:17:16.350 "driver_specific": { 00:17:16.350 "raid": { 00:17:16.350 "uuid": "da103670-c2d9-4406-89db-f514c16bbce7", 00:17:16.350 "strip_size_kb": 0, 00:17:16.350 "state": "online", 00:17:16.350 "raid_level": "raid1", 00:17:16.350 "superblock": true, 00:17:16.350 "num_base_bdevs": 2, 00:17:16.350 "num_base_bdevs_discovered": 2, 00:17:16.350 "num_base_bdevs_operational": 2, 00:17:16.350 "base_bdevs_list": [ 00:17:16.350 { 00:17:16.350 "name": "pt1", 00:17:16.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:16.350 "is_configured": true, 00:17:16.350 "data_offset": 256, 00:17:16.350 "data_size": 7936 00:17:16.350 }, 00:17:16.350 { 00:17:16.350 "name": "pt2", 00:17:16.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.350 "is_configured": true, 00:17:16.350 "data_offset": 256, 00:17:16.350 "data_size": 7936 00:17:16.350 } 00:17:16.350 ] 00:17:16.350 } 00:17:16.350 } 00:17:16.350 }' 00:17:16.350 08:54:54 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:16.350 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:16.350 pt2' 00:17:16.350 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:16.610 [2024-09-28 08:54:54.452444] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=da103670-c2d9-4406-89db-f514c16bbce7 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z da103670-c2d9-4406-89db-f514c16bbce7 ']' 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.610 [2024-09-28 08:54:54.480184] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:16.610 [2024-09-28 08:54:54.480208] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:16.610 [2024-09-28 08:54:54.480272] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.610 [2024-09-28 08:54:54.480320] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:16.610 [2024-09-28 08:54:54.480333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.610 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.611 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:16.611 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:16.611 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.611 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.611 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.611 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:16.611 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:16.611 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:17:16.611 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:16.611 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:16.870 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.871 [2024-09-28 08:54:54.611960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:16.871 [2024-09-28 08:54:54.614033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:16.871 [2024-09-28 08:54:54.614111] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:16.871 [2024-09-28 08:54:54.614155] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:16.871 [2024-09-28 08:54:54.614168] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:16.871 [2024-09-28 08:54:54.614178] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:16.871 request: 00:17:16.871 { 00:17:16.871 "name": "raid_bdev1", 00:17:16.871 "raid_level": "raid1", 00:17:16.871 "base_bdevs": [ 00:17:16.871 "malloc1", 00:17:16.871 "malloc2" 00:17:16.871 ], 00:17:16.871 "superblock": false, 00:17:16.871 "method": "bdev_raid_create", 00:17:16.871 "req_id": 1 00:17:16.871 } 00:17:16.871 Got JSON-RPC error response 00:17:16.871 response: 00:17:16.871 { 00:17:16.871 "code": -17, 00:17:16.871 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:16.871 } 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.871 [2024-09-28 08:54:54.675826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:16.871 [2024-09-28 08:54:54.675869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.871 [2024-09-28 08:54:54.675883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:16.871 [2024-09-28 08:54:54.675909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.871 [2024-09-28 08:54:54.678188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.871 [2024-09-28 08:54:54.678222] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:16.871 [2024-09-28 08:54:54.678298] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:16.871 [2024-09-28 08:54:54.678351] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:16.871 pt1 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.871 "name": "raid_bdev1", 00:17:16.871 "uuid": "da103670-c2d9-4406-89db-f514c16bbce7", 00:17:16.871 "strip_size_kb": 0, 00:17:16.871 "state": "configuring", 00:17:16.871 "raid_level": "raid1", 00:17:16.871 "superblock": true, 00:17:16.871 "num_base_bdevs": 2, 00:17:16.871 "num_base_bdevs_discovered": 1, 00:17:16.871 "num_base_bdevs_operational": 2, 00:17:16.871 "base_bdevs_list": [ 00:17:16.871 { 00:17:16.871 "name": "pt1", 00:17:16.871 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:16.871 "is_configured": true, 00:17:16.871 "data_offset": 256, 00:17:16.871 "data_size": 7936 00:17:16.871 }, 00:17:16.871 { 00:17:16.871 "name": null, 00:17:16.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.871 "is_configured": false, 00:17:16.871 "data_offset": 256, 00:17:16.871 "data_size": 7936 00:17:16.871 } 00:17:16.871 ] 00:17:16.871 }' 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.871 08:54:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:17:17.441 [2024-09-28 08:54:55.147216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:17.441 [2024-09-28 08:54:55.147266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.441 [2024-09-28 08:54:55.147298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:17.441 [2024-09-28 08:54:55.147308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.441 [2024-09-28 08:54:55.147716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.441 [2024-09-28 08:54:55.147748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:17.441 [2024-09-28 08:54:55.147804] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:17.441 [2024-09-28 08:54:55.147825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:17.441 [2024-09-28 08:54:55.147933] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:17.441 [2024-09-28 08:54:55.147949] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:17.441 [2024-09-28 08:54:55.148181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:17.441 [2024-09-28 08:54:55.148345] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:17.441 [2024-09-28 08:54:55.148364] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:17.441 [2024-09-28 08:54:55.148494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.441 pt2 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:17.441 08:54:55 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.441 "name": "raid_bdev1", 00:17:17.441 "uuid": "da103670-c2d9-4406-89db-f514c16bbce7", 00:17:17.441 
"strip_size_kb": 0, 00:17:17.441 "state": "online", 00:17:17.441 "raid_level": "raid1", 00:17:17.441 "superblock": true, 00:17:17.441 "num_base_bdevs": 2, 00:17:17.441 "num_base_bdevs_discovered": 2, 00:17:17.441 "num_base_bdevs_operational": 2, 00:17:17.441 "base_bdevs_list": [ 00:17:17.441 { 00:17:17.441 "name": "pt1", 00:17:17.441 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.441 "is_configured": true, 00:17:17.441 "data_offset": 256, 00:17:17.441 "data_size": 7936 00:17:17.441 }, 00:17:17.441 { 00:17:17.441 "name": "pt2", 00:17:17.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.441 "is_configured": true, 00:17:17.441 "data_offset": 256, 00:17:17.441 "data_size": 7936 00:17:17.441 } 00:17:17.441 ] 00:17:17.441 }' 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.441 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.701 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:17.701 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:17.701 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:17.701 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:17.701 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:17.701 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:17.701 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:17.701 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:17.701 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.701 08:54:55 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.701 [2024-09-28 08:54:55.634589] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.701 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.701 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:17.701 "name": "raid_bdev1", 00:17:17.701 "aliases": [ 00:17:17.701 "da103670-c2d9-4406-89db-f514c16bbce7" 00:17:17.701 ], 00:17:17.701 "product_name": "Raid Volume", 00:17:17.701 "block_size": 4096, 00:17:17.701 "num_blocks": 7936, 00:17:17.701 "uuid": "da103670-c2d9-4406-89db-f514c16bbce7", 00:17:17.701 "assigned_rate_limits": { 00:17:17.701 "rw_ios_per_sec": 0, 00:17:17.701 "rw_mbytes_per_sec": 0, 00:17:17.701 "r_mbytes_per_sec": 0, 00:17:17.701 "w_mbytes_per_sec": 0 00:17:17.701 }, 00:17:17.701 "claimed": false, 00:17:17.701 "zoned": false, 00:17:17.701 "supported_io_types": { 00:17:17.701 "read": true, 00:17:17.701 "write": true, 00:17:17.701 "unmap": false, 00:17:17.701 "flush": false, 00:17:17.701 "reset": true, 00:17:17.701 "nvme_admin": false, 00:17:17.701 "nvme_io": false, 00:17:17.701 "nvme_io_md": false, 00:17:17.701 "write_zeroes": true, 00:17:17.701 "zcopy": false, 00:17:17.701 "get_zone_info": false, 00:17:17.701 "zone_management": false, 00:17:17.701 "zone_append": false, 00:17:17.701 "compare": false, 00:17:17.701 "compare_and_write": false, 00:17:17.701 "abort": false, 00:17:17.701 "seek_hole": false, 00:17:17.701 "seek_data": false, 00:17:17.701 "copy": false, 00:17:17.701 "nvme_iov_md": false 00:17:17.701 }, 00:17:17.701 "memory_domains": [ 00:17:17.701 { 00:17:17.701 "dma_device_id": "system", 00:17:17.701 "dma_device_type": 1 00:17:17.701 }, 00:17:17.701 { 00:17:17.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.701 "dma_device_type": 2 00:17:17.701 }, 00:17:17.701 { 00:17:17.701 "dma_device_id": "system", 00:17:17.701 
"dma_device_type": 1 00:17:17.701 }, 00:17:17.701 { 00:17:17.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.701 "dma_device_type": 2 00:17:17.701 } 00:17:17.701 ], 00:17:17.701 "driver_specific": { 00:17:17.701 "raid": { 00:17:17.701 "uuid": "da103670-c2d9-4406-89db-f514c16bbce7", 00:17:17.701 "strip_size_kb": 0, 00:17:17.701 "state": "online", 00:17:17.701 "raid_level": "raid1", 00:17:17.701 "superblock": true, 00:17:17.701 "num_base_bdevs": 2, 00:17:17.701 "num_base_bdevs_discovered": 2, 00:17:17.701 "num_base_bdevs_operational": 2, 00:17:17.701 "base_bdevs_list": [ 00:17:17.701 { 00:17:17.701 "name": "pt1", 00:17:17.701 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.701 "is_configured": true, 00:17:17.701 "data_offset": 256, 00:17:17.701 "data_size": 7936 00:17:17.701 }, 00:17:17.701 { 00:17:17.701 "name": "pt2", 00:17:17.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.701 "is_configured": true, 00:17:17.701 "data_offset": 256, 00:17:17.701 "data_size": 7936 00:17:17.701 } 00:17:17.701 ] 00:17:17.701 } 00:17:17.702 } 00:17:17.702 }' 00:17:17.702 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:17.962 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:17.962 pt2' 00:17:17.962 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.962 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:17.962 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.962 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.962 08:54:55 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:17.962 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.962 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.962 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.962 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:17.962 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:17.962 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.962 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:17.962 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.963 [2024-09-28 
08:54:55.858192] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' da103670-c2d9-4406-89db-f514c16bbce7 '!=' da103670-c2d9-4406-89db-f514c16bbce7 ']' 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.963 [2024-09-28 08:54:55.905945] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.963 08:54:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.223 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.223 "name": "raid_bdev1", 00:17:18.223 "uuid": "da103670-c2d9-4406-89db-f514c16bbce7", 00:17:18.223 "strip_size_kb": 0, 00:17:18.223 "state": "online", 00:17:18.223 "raid_level": "raid1", 00:17:18.223 "superblock": true, 00:17:18.223 "num_base_bdevs": 2, 00:17:18.223 "num_base_bdevs_discovered": 1, 00:17:18.223 "num_base_bdevs_operational": 1, 00:17:18.223 "base_bdevs_list": [ 00:17:18.223 { 00:17:18.223 "name": null, 00:17:18.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.223 "is_configured": false, 00:17:18.223 "data_offset": 0, 00:17:18.223 "data_size": 7936 00:17:18.223 }, 00:17:18.223 { 00:17:18.223 "name": "pt2", 00:17:18.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.223 "is_configured": true, 00:17:18.223 "data_offset": 256, 00:17:18.223 "data_size": 7936 00:17:18.223 } 00:17:18.223 ] 00:17:18.223 }' 00:17:18.223 08:54:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.223 08:54:55 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.483 [2024-09-28 08:54:56.369124] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.483 [2024-09-28 08:54:56.369146] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.483 [2024-09-28 08:54:56.369193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.483 [2024-09-28 08:54:56.369225] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.483 [2024-09-28 08:54:56.369236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.483 [2024-09-28 08:54:56.441014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:18.483 [2024-09-28 08:54:56.441059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.483 [2024-09-28 08:54:56.441072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:18.483 [2024-09-28 08:54:56.441083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.483 [2024-09-28 08:54:56.443411] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.483 [2024-09-28 08:54:56.443465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:18.483 [2024-09-28 08:54:56.443526] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:18.483 [2024-09-28 08:54:56.443574] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:18.483 [2024-09-28 08:54:56.443657] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:18.483 [2024-09-28 08:54:56.443681] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:18.483 [2024-09-28 08:54:56.443886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:18.483 [2024-09-28 08:54:56.444039] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:18.483 [2024-09-28 08:54:56.444051] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:18.483 [2024-09-28 08:54:56.444174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.483 pt2 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.483 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.743 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.743 "name": "raid_bdev1", 00:17:18.743 "uuid": "da103670-c2d9-4406-89db-f514c16bbce7", 00:17:18.743 "strip_size_kb": 0, 00:17:18.743 "state": "online", 00:17:18.743 "raid_level": "raid1", 00:17:18.743 "superblock": true, 00:17:18.743 "num_base_bdevs": 2, 00:17:18.743 "num_base_bdevs_discovered": 1, 00:17:18.743 "num_base_bdevs_operational": 1, 00:17:18.743 "base_bdevs_list": [ 00:17:18.743 { 00:17:18.743 "name": null, 00:17:18.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.743 "is_configured": false, 00:17:18.743 "data_offset": 256, 00:17:18.743 "data_size": 7936 00:17:18.743 }, 00:17:18.743 { 00:17:18.743 "name": "pt2", 00:17:18.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.743 "is_configured": true, 00:17:18.743 "data_offset": 256, 00:17:18.743 "data_size": 7936 00:17:18.743 } 00:17:18.743 ] 00:17:18.743 }' 
00:17:18.743 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.743 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.003 [2024-09-28 08:54:56.796370] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.003 [2024-09-28 08:54:56.796396] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.003 [2024-09-28 08:54:56.796443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.003 [2024-09-28 08:54:56.796480] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.003 [2024-09-28 08:54:56.796487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.003 [2024-09-28 08:54:56.856283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:19.003 [2024-09-28 08:54:56.856325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.003 [2024-09-28 08:54:56.856357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:19.003 [2024-09-28 08:54:56.856365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.003 [2024-09-28 08:54:56.858677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.003 [2024-09-28 08:54:56.858707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:19.003 [2024-09-28 08:54:56.858787] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:19.003 [2024-09-28 08:54:56.858830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:19.003 [2024-09-28 08:54:56.858956] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:19.003 [2024-09-28 08:54:56.858972] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.003 [2024-09-28 08:54:56.858989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:19.003 [2024-09-28 08:54:56.859044] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:19.003 [2024-09-28 08:54:56.859115] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:19.003 [2024-09-28 08:54:56.859126] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:19.003 [2024-09-28 08:54:56.859340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:19.003 [2024-09-28 08:54:56.859509] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:19.003 [2024-09-28 08:54:56.859549] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:19.003 [2024-09-28 08:54:56.859709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.003 pt1 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.003 "name": "raid_bdev1", 00:17:19.003 "uuid": "da103670-c2d9-4406-89db-f514c16bbce7", 00:17:19.003 "strip_size_kb": 0, 00:17:19.003 "state": "online", 00:17:19.003 "raid_level": "raid1", 00:17:19.003 "superblock": true, 00:17:19.003 "num_base_bdevs": 2, 00:17:19.003 "num_base_bdevs_discovered": 1, 00:17:19.003 "num_base_bdevs_operational": 1, 00:17:19.003 "base_bdevs_list": [ 00:17:19.003 { 00:17:19.003 "name": null, 00:17:19.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.003 "is_configured": false, 00:17:19.003 "data_offset": 256, 00:17:19.003 "data_size": 7936 00:17:19.003 }, 00:17:19.003 { 00:17:19.003 "name": "pt2", 00:17:19.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.003 "is_configured": true, 00:17:19.003 "data_offset": 256, 00:17:19.003 "data_size": 7936 00:17:19.003 } 00:17:19.003 ] 00:17:19.003 }' 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.003 08:54:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.574 [2024-09-28 08:54:57.371631] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' da103670-c2d9-4406-89db-f514c16bbce7 '!=' da103670-c2d9-4406-89db-f514c16bbce7 ']' 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86206 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 86206 ']' 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 86206 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86206 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:19.574 killing process with pid 86206 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86206' 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 86206 00:17:19.574 [2024-09-28 08:54:57.435310] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:19.574 [2024-09-28 08:54:57.435368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.574 [2024-09-28 08:54:57.435403] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.574 [2024-09-28 08:54:57.435418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:19.574 08:54:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 86206 00:17:19.834 [2024-09-28 08:54:57.648768] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:21.217 08:54:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:21.217 00:17:21.217 real 0m6.229s 00:17:21.217 user 0m9.138s 00:17:21.217 sys 0m1.210s 00:17:21.217 08:54:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:21.217 08:54:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.217 ************************************ 00:17:21.217 END TEST raid_superblock_test_4k 00:17:21.217 ************************************ 00:17:21.217 08:54:59 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:17:21.217 08:54:59 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:21.217 08:54:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:21.217 08:54:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:21.217 08:54:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:21.217 ************************************ 00:17:21.217 START TEST raid_rebuild_test_sb_4k 00:17:21.217 ************************************ 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86534 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86534 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 86534 ']' 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:17:21.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:21.217 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.217 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:21.217 Zero copy mechanism will not be used. 00:17:21.217 [2024-09-28 08:54:59.140631] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:21.217 [2024-09-28 08:54:59.140759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86534 ] 00:17:21.478 [2024-09-28 08:54:59.305599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.737 [2024-09-28 08:54:59.551529] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.997 [2024-09-28 08:54:59.779453] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:21.997 [2024-09-28 08:54:59.779505] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:21.997 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:21.997 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:17:21.997 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:21.997 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:21.997 
08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.997 08:54:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.257 BaseBdev1_malloc 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.257 [2024-09-28 08:55:00.016311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:22.257 [2024-09-28 08:55:00.016404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.257 [2024-09-28 08:55:00.016431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:22.257 [2024-09-28 08:55:00.016447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.257 [2024-09-28 08:55:00.018834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.257 [2024-09-28 08:55:00.018872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:22.257 BaseBdev1 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:22.257 BaseBdev2_malloc 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.257 [2024-09-28 08:55:00.105184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:22.257 [2024-09-28 08:55:00.105248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.257 [2024-09-28 08:55:00.105286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:22.257 [2024-09-28 08:55:00.105300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.257 [2024-09-28 08:55:00.107578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.257 [2024-09-28 08:55:00.107614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:22.257 BaseBdev2 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.257 spare_malloc 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:22.257 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.258 spare_delay 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.258 [2024-09-28 08:55:00.173745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:22.258 [2024-09-28 08:55:00.173803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.258 [2024-09-28 08:55:00.173838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:22.258 [2024-09-28 08:55:00.173848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.258 [2024-09-28 08:55:00.176161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.258 [2024-09-28 08:55:00.176198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:22.258 spare 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.258 
[2024-09-28 08:55:00.185778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.258 [2024-09-28 08:55:00.187745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:22.258 [2024-09-28 08:55:00.187925] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:22.258 [2024-09-28 08:55:00.187940] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:22.258 [2024-09-28 08:55:00.188179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:22.258 [2024-09-28 08:55:00.188352] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:22.258 [2024-09-28 08:55:00.188367] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:22.258 [2024-09-28 08:55:00.188499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.258 "name": "raid_bdev1", 00:17:22.258 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:22.258 "strip_size_kb": 0, 00:17:22.258 "state": "online", 00:17:22.258 "raid_level": "raid1", 00:17:22.258 "superblock": true, 00:17:22.258 "num_base_bdevs": 2, 00:17:22.258 "num_base_bdevs_discovered": 2, 00:17:22.258 "num_base_bdevs_operational": 2, 00:17:22.258 "base_bdevs_list": [ 00:17:22.258 { 00:17:22.258 "name": "BaseBdev1", 00:17:22.258 "uuid": "181766a4-0ba6-5d21-846c-d85335c9b767", 00:17:22.258 "is_configured": true, 00:17:22.258 "data_offset": 256, 00:17:22.258 "data_size": 7936 00:17:22.258 }, 00:17:22.258 { 00:17:22.258 "name": "BaseBdev2", 00:17:22.258 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:22.258 "is_configured": true, 00:17:22.258 "data_offset": 256, 00:17:22.258 "data_size": 7936 00:17:22.258 } 00:17:22.258 ] 00:17:22.258 }' 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.258 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.827 [2024-09-28 08:55:00.685075] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:22.827 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:23.087 [2024-09-28 08:55:00.936561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:23.087 /dev/nbd0 00:17:23.087 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:23.087 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:23.087 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:23.087 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:23.087 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:23.087 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:23.087 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:23.087 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:23.087 08:55:00 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:23.087 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:23.087 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.087 1+0 records in 00:17:23.087 1+0 records out 00:17:23.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358211 s, 11.4 MB/s 00:17:23.087 08:55:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.087 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:23.087 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.087 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:23.087 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:23.087 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.087 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.087 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:23.087 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:23.087 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:23.656 7936+0 records in 00:17:23.656 7936+0 records out 00:17:23.656 32505856 bytes (33 MB, 31 MiB) copied, 0.605518 s, 53.7 MB/s 00:17:23.656 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:23.656 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.656 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:23.656 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:23.656 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:23.656 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.656 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:23.915 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:23.916 [2024-09-28 08:55:01.814791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.916 [2024-09-28 08:55:01.832513] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.916 "name": 
"raid_bdev1", 00:17:23.916 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:23.916 "strip_size_kb": 0, 00:17:23.916 "state": "online", 00:17:23.916 "raid_level": "raid1", 00:17:23.916 "superblock": true, 00:17:23.916 "num_base_bdevs": 2, 00:17:23.916 "num_base_bdevs_discovered": 1, 00:17:23.916 "num_base_bdevs_operational": 1, 00:17:23.916 "base_bdevs_list": [ 00:17:23.916 { 00:17:23.916 "name": null, 00:17:23.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.916 "is_configured": false, 00:17:23.916 "data_offset": 0, 00:17:23.916 "data_size": 7936 00:17:23.916 }, 00:17:23.916 { 00:17:23.916 "name": "BaseBdev2", 00:17:23.916 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:23.916 "is_configured": true, 00:17:23.916 "data_offset": 256, 00:17:23.916 "data_size": 7936 00:17:23.916 } 00:17:23.916 ] 00:17:23.916 }' 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.916 08:55:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.492 08:55:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:24.492 08:55:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.492 08:55:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.492 [2024-09-28 08:55:02.307741] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:24.492 [2024-09-28 08:55:02.322197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:24.492 08:55:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.492 08:55:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:24.492 [2024-09-28 08:55:02.324268] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:25.433 08:55:03 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.433 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.433 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.433 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.433 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.433 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.433 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.433 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.433 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.433 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.433 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.433 "name": "raid_bdev1", 00:17:25.433 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:25.433 "strip_size_kb": 0, 00:17:25.433 "state": "online", 00:17:25.433 "raid_level": "raid1", 00:17:25.433 "superblock": true, 00:17:25.433 "num_base_bdevs": 2, 00:17:25.433 "num_base_bdevs_discovered": 2, 00:17:25.433 "num_base_bdevs_operational": 2, 00:17:25.433 "process": { 00:17:25.433 "type": "rebuild", 00:17:25.433 "target": "spare", 00:17:25.433 "progress": { 00:17:25.433 "blocks": 2560, 00:17:25.433 "percent": 32 00:17:25.433 } 00:17:25.433 }, 00:17:25.433 "base_bdevs_list": [ 00:17:25.433 { 00:17:25.433 "name": "spare", 00:17:25.433 "uuid": "4270c9a5-46ec-5fa8-b8be-6a31799cb852", 00:17:25.433 "is_configured": true, 00:17:25.433 "data_offset": 256, 
00:17:25.433 "data_size": 7936 00:17:25.433 }, 00:17:25.433 { 00:17:25.433 "name": "BaseBdev2", 00:17:25.433 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:25.433 "is_configured": true, 00:17:25.433 "data_offset": 256, 00:17:25.433 "data_size": 7936 00:17:25.433 } 00:17:25.433 ] 00:17:25.433 }' 00:17:25.433 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.693 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:25.693 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.693 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.693 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:25.693 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.693 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.693 [2024-09-28 08:55:03.488177] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:25.693 [2024-09-28 08:55:03.532938] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:25.693 [2024-09-28 08:55:03.533016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.693 [2024-09-28 08:55:03.533031] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:25.693 [2024-09-28 08:55:03.533043] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:25.693 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:25.694 
08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.694 "name": "raid_bdev1", 00:17:25.694 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:25.694 "strip_size_kb": 0, 00:17:25.694 "state": "online", 00:17:25.694 "raid_level": "raid1", 00:17:25.694 "superblock": true, 00:17:25.694 "num_base_bdevs": 2, 00:17:25.694 "num_base_bdevs_discovered": 1, 00:17:25.694 
"num_base_bdevs_operational": 1, 00:17:25.694 "base_bdevs_list": [ 00:17:25.694 { 00:17:25.694 "name": null, 00:17:25.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.694 "is_configured": false, 00:17:25.694 "data_offset": 0, 00:17:25.694 "data_size": 7936 00:17:25.694 }, 00:17:25.694 { 00:17:25.694 "name": "BaseBdev2", 00:17:25.694 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:25.694 "is_configured": true, 00:17:25.694 "data_offset": 256, 00:17:25.694 "data_size": 7936 00:17:25.694 } 00:17:25.694 ] 00:17:25.694 }' 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.694 08:55:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.262 
"name": "raid_bdev1", 00:17:26.262 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:26.262 "strip_size_kb": 0, 00:17:26.262 "state": "online", 00:17:26.262 "raid_level": "raid1", 00:17:26.262 "superblock": true, 00:17:26.262 "num_base_bdevs": 2, 00:17:26.262 "num_base_bdevs_discovered": 1, 00:17:26.262 "num_base_bdevs_operational": 1, 00:17:26.262 "base_bdevs_list": [ 00:17:26.262 { 00:17:26.262 "name": null, 00:17:26.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.262 "is_configured": false, 00:17:26.262 "data_offset": 0, 00:17:26.262 "data_size": 7936 00:17:26.262 }, 00:17:26.262 { 00:17:26.262 "name": "BaseBdev2", 00:17:26.262 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:26.262 "is_configured": true, 00:17:26.262 "data_offset": 256, 00:17:26.262 "data_size": 7936 00:17:26.262 } 00:17:26.262 ] 00:17:26.262 }' 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.262 [2024-09-28 08:55:04.157287] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.262 [2024-09-28 08:55:04.172191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:26.262 08:55:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:26.262 [2024-09-28 08:55:04.174218] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:27.198 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.198 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.198 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.198 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.198 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.198 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.198 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.198 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.198 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.457 "name": "raid_bdev1", 00:17:27.457 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:27.457 "strip_size_kb": 0, 00:17:27.457 "state": "online", 00:17:27.457 "raid_level": "raid1", 00:17:27.457 "superblock": true, 00:17:27.457 "num_base_bdevs": 2, 00:17:27.457 "num_base_bdevs_discovered": 2, 00:17:27.457 "num_base_bdevs_operational": 2, 00:17:27.457 "process": { 00:17:27.457 "type": "rebuild", 00:17:27.457 "target": "spare", 00:17:27.457 "progress": { 00:17:27.457 "blocks": 2560, 00:17:27.457 
"percent": 32 00:17:27.457 } 00:17:27.457 }, 00:17:27.457 "base_bdevs_list": [ 00:17:27.457 { 00:17:27.457 "name": "spare", 00:17:27.457 "uuid": "4270c9a5-46ec-5fa8-b8be-6a31799cb852", 00:17:27.457 "is_configured": true, 00:17:27.457 "data_offset": 256, 00:17:27.457 "data_size": 7936 00:17:27.457 }, 00:17:27.457 { 00:17:27.457 "name": "BaseBdev2", 00:17:27.457 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:27.457 "is_configured": true, 00:17:27.457 "data_offset": 256, 00:17:27.457 "data_size": 7936 00:17:27.457 } 00:17:27.457 ] 00:17:27.457 }' 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:27.457 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=685 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.457 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.458 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.458 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.458 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.458 "name": "raid_bdev1", 00:17:27.458 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:27.458 "strip_size_kb": 0, 00:17:27.458 "state": "online", 00:17:27.458 "raid_level": "raid1", 00:17:27.458 "superblock": true, 00:17:27.458 "num_base_bdevs": 2, 00:17:27.458 "num_base_bdevs_discovered": 2, 00:17:27.458 "num_base_bdevs_operational": 2, 00:17:27.458 "process": { 00:17:27.458 "type": "rebuild", 00:17:27.458 "target": "spare", 00:17:27.458 "progress": { 00:17:27.458 "blocks": 2816, 00:17:27.458 "percent": 35 00:17:27.458 } 00:17:27.458 }, 00:17:27.458 "base_bdevs_list": [ 00:17:27.458 { 00:17:27.458 "name": "spare", 00:17:27.458 "uuid": "4270c9a5-46ec-5fa8-b8be-6a31799cb852", 00:17:27.458 "is_configured": true, 00:17:27.458 "data_offset": 256, 00:17:27.458 "data_size": 7936 00:17:27.458 }, 00:17:27.458 { 00:17:27.458 "name": "BaseBdev2", 
00:17:27.458 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:27.458 "is_configured": true, 00:17:27.458 "data_offset": 256, 00:17:27.458 "data_size": 7936 00:17:27.458 } 00:17:27.458 ] 00:17:27.458 }' 00:17:27.458 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.458 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.458 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.458 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.458 08:55:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:28.836 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.836 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.836 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.836 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.836 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.836 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.836 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.836 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.836 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.836 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.836 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.836 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.836 "name": "raid_bdev1", 00:17:28.836 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:28.836 "strip_size_kb": 0, 00:17:28.836 "state": "online", 00:17:28.836 "raid_level": "raid1", 00:17:28.836 "superblock": true, 00:17:28.836 "num_base_bdevs": 2, 00:17:28.836 "num_base_bdevs_discovered": 2, 00:17:28.836 "num_base_bdevs_operational": 2, 00:17:28.836 "process": { 00:17:28.836 "type": "rebuild", 00:17:28.836 "target": "spare", 00:17:28.836 "progress": { 00:17:28.837 "blocks": 5632, 00:17:28.837 "percent": 70 00:17:28.837 } 00:17:28.837 }, 00:17:28.837 "base_bdevs_list": [ 00:17:28.837 { 00:17:28.837 "name": "spare", 00:17:28.837 "uuid": "4270c9a5-46ec-5fa8-b8be-6a31799cb852", 00:17:28.837 "is_configured": true, 00:17:28.837 "data_offset": 256, 00:17:28.837 "data_size": 7936 00:17:28.837 }, 00:17:28.837 { 00:17:28.837 "name": "BaseBdev2", 00:17:28.837 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:28.837 "is_configured": true, 00:17:28.837 "data_offset": 256, 00:17:28.837 "data_size": 7936 00:17:28.837 } 00:17:28.837 ] 00:17:28.837 }' 00:17:28.837 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.837 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.837 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.837 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.837 08:55:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:29.405 [2024-09-28 08:55:07.295776] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:29.405 [2024-09-28 08:55:07.295862] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:29.405 [2024-09-28 08:55:07.295984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.664 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:29.664 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.664 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.664 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.664 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.664 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.664 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.664 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.664 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.664 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.664 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.664 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.664 "name": "raid_bdev1", 00:17:29.664 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:29.664 "strip_size_kb": 0, 00:17:29.664 "state": "online", 00:17:29.664 "raid_level": "raid1", 00:17:29.664 "superblock": true, 00:17:29.664 "num_base_bdevs": 2, 00:17:29.664 "num_base_bdevs_discovered": 2, 00:17:29.664 "num_base_bdevs_operational": 2, 00:17:29.664 "base_bdevs_list": [ 00:17:29.664 { 00:17:29.664 "name": 
"spare", 00:17:29.664 "uuid": "4270c9a5-46ec-5fa8-b8be-6a31799cb852", 00:17:29.664 "is_configured": true, 00:17:29.664 "data_offset": 256, 00:17:29.664 "data_size": 7936 00:17:29.664 }, 00:17:29.664 { 00:17:29.664 "name": "BaseBdev2", 00:17:29.664 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:29.664 "is_configured": true, 00:17:29.664 "data_offset": 256, 00:17:29.664 "data_size": 7936 00:17:29.664 } 00:17:29.664 ] 00:17:29.664 }' 00:17:29.664 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.664 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:29.664 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.923 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:29.923 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:29.923 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:29.923 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.924 "name": "raid_bdev1", 00:17:29.924 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:29.924 "strip_size_kb": 0, 00:17:29.924 "state": "online", 00:17:29.924 "raid_level": "raid1", 00:17:29.924 "superblock": true, 00:17:29.924 "num_base_bdevs": 2, 00:17:29.924 "num_base_bdevs_discovered": 2, 00:17:29.924 "num_base_bdevs_operational": 2, 00:17:29.924 "base_bdevs_list": [ 00:17:29.924 { 00:17:29.924 "name": "spare", 00:17:29.924 "uuid": "4270c9a5-46ec-5fa8-b8be-6a31799cb852", 00:17:29.924 "is_configured": true, 00:17:29.924 "data_offset": 256, 00:17:29.924 "data_size": 7936 00:17:29.924 }, 00:17:29.924 { 00:17:29.924 "name": "BaseBdev2", 00:17:29.924 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:29.924 "is_configured": true, 00:17:29.924 "data_offset": 256, 00:17:29.924 "data_size": 7936 00:17:29.924 } 00:17:29.924 ] 00:17:29.924 }' 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.924 "name": "raid_bdev1", 00:17:29.924 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:29.924 "strip_size_kb": 0, 00:17:29.924 "state": "online", 00:17:29.924 "raid_level": "raid1", 00:17:29.924 "superblock": true, 00:17:29.924 "num_base_bdevs": 2, 00:17:29.924 "num_base_bdevs_discovered": 2, 00:17:29.924 "num_base_bdevs_operational": 2, 00:17:29.924 "base_bdevs_list": [ 00:17:29.924 { 00:17:29.924 "name": "spare", 00:17:29.924 "uuid": "4270c9a5-46ec-5fa8-b8be-6a31799cb852", 00:17:29.924 "is_configured": true, 00:17:29.924 "data_offset": 256, 00:17:29.924 "data_size": 7936 00:17:29.924 }, 00:17:29.924 
{ 00:17:29.924 "name": "BaseBdev2", 00:17:29.924 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:29.924 "is_configured": true, 00:17:29.924 "data_offset": 256, 00:17:29.924 "data_size": 7936 00:17:29.924 } 00:17:29.924 ] 00:17:29.924 }' 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.924 08:55:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.551 [2024-09-28 08:55:08.307182] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.551 [2024-09-28 08:55:08.307274] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.551 [2024-09-28 08:55:08.307387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.551 [2024-09-28 08:55:08.307501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.551 [2024-09-28 08:55:08.307565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.551 
08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:30.551 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:30.815 /dev/nbd0 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:30.815 08:55:08 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:30.815 1+0 records in 00:17:30.815 1+0 records out 00:17:30.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440539 s, 9.3 MB/s 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:30.815 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:31.080 /dev/nbd1 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:31.080 1+0 records in 00:17:31.080 1+0 records out 00:17:31.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427864 s, 9.6 MB/s 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:31.080 08:55:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:31.080 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:31.080 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:31.080 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:31.080 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:31.080 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:31.080 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.080 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:31.339 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:31.339 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:31.339 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:31.339 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.339 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.339 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:31.339 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:17:31.339 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.339 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.339 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.600 [2024-09-28 08:55:09.487878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:31.600 [2024-09-28 08:55:09.487937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.600 [2024-09-28 08:55:09.487963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:31.600 [2024-09-28 08:55:09.487971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.600 [2024-09-28 08:55:09.490430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.600 [2024-09-28 08:55:09.490468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:31.600 [2024-09-28 08:55:09.490571] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:31.600 [2024-09-28 08:55:09.490628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:31.600 [2024-09-28 08:55:09.490787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.600 spare 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.600 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.600 [2024-09-28 08:55:09.590692] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:31.600 [2024-09-28 08:55:09.590766] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:31.600 [2024-09-28 08:55:09.591059] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:31.600 [2024-09-28 08:55:09.591232] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:31.600 [2024-09-28 08:55:09.591243] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:31.600 [2024-09-28 08:55:09.591407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.860 08:55:09 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.860 "name": "raid_bdev1", 00:17:31.860 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:31.860 "strip_size_kb": 0, 00:17:31.860 "state": "online", 00:17:31.860 "raid_level": "raid1", 00:17:31.860 "superblock": true, 00:17:31.860 "num_base_bdevs": 2, 00:17:31.860 "num_base_bdevs_discovered": 2, 00:17:31.860 "num_base_bdevs_operational": 2, 00:17:31.860 "base_bdevs_list": [ 00:17:31.860 { 00:17:31.860 "name": "spare", 00:17:31.860 "uuid": "4270c9a5-46ec-5fa8-b8be-6a31799cb852", 00:17:31.860 "is_configured": true, 00:17:31.860 "data_offset": 256, 00:17:31.860 "data_size": 7936 00:17:31.860 }, 00:17:31.860 { 00:17:31.860 "name": "BaseBdev2", 00:17:31.860 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:31.860 "is_configured": true, 00:17:31.860 "data_offset": 256, 00:17:31.860 "data_size": 7936 00:17:31.860 } 00:17:31.860 ] 00:17:31.860 }' 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.860 08:55:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.119 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:32.119 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.119 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:32.119 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:32.119 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.119 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.119 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.119 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.119 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.119 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.120 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.120 "name": "raid_bdev1", 00:17:32.120 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:32.120 "strip_size_kb": 0, 00:17:32.120 "state": "online", 00:17:32.120 "raid_level": "raid1", 00:17:32.120 "superblock": true, 00:17:32.120 "num_base_bdevs": 2, 00:17:32.120 "num_base_bdevs_discovered": 2, 00:17:32.120 "num_base_bdevs_operational": 2, 00:17:32.120 "base_bdevs_list": [ 00:17:32.120 { 00:17:32.120 "name": "spare", 00:17:32.120 "uuid": "4270c9a5-46ec-5fa8-b8be-6a31799cb852", 00:17:32.120 "is_configured": true, 00:17:32.120 "data_offset": 256, 00:17:32.120 "data_size": 7936 00:17:32.120 }, 00:17:32.120 { 00:17:32.120 "name": "BaseBdev2", 00:17:32.120 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:32.120 "is_configured": true, 00:17:32.120 "data_offset": 256, 00:17:32.120 "data_size": 7936 00:17:32.120 } 00:17:32.120 ] 00:17:32.120 }' 00:17:32.120 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.379 [2024-09-28 08:55:10.242719] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:32.379 08:55:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.379 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.379 "name": "raid_bdev1", 00:17:32.379 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:32.379 "strip_size_kb": 0, 00:17:32.379 "state": "online", 00:17:32.379 "raid_level": "raid1", 00:17:32.379 "superblock": true, 00:17:32.379 "num_base_bdevs": 2, 00:17:32.379 "num_base_bdevs_discovered": 1, 00:17:32.379 "num_base_bdevs_operational": 1, 00:17:32.379 "base_bdevs_list": [ 00:17:32.379 { 00:17:32.379 "name": null, 00:17:32.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.380 "is_configured": false, 00:17:32.380 "data_offset": 0, 00:17:32.380 "data_size": 7936 00:17:32.380 }, 00:17:32.380 { 00:17:32.380 "name": "BaseBdev2", 00:17:32.380 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:32.380 "is_configured": true, 00:17:32.380 "data_offset": 256, 00:17:32.380 "data_size": 7936 00:17:32.380 } 00:17:32.380 ] 00:17:32.380 }' 00:17:32.380 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.380 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.952 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:32.952 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.952 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.952 [2024-09-28 08:55:10.701907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:32.952 [2024-09-28 08:55:10.702126] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:32.952 [2024-09-28 08:55:10.702192] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:32.952 [2024-09-28 08:55:10.702252] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:32.952 [2024-09-28 08:55:10.718358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:32.952 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.952 08:55:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:32.952 [2024-09-28 08:55:10.720517] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:33.891 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.891 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.891 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.891 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.891 
08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.891 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.891 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.891 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.891 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.891 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.891 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.891 "name": "raid_bdev1", 00:17:33.891 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:33.891 "strip_size_kb": 0, 00:17:33.891 "state": "online", 00:17:33.891 "raid_level": "raid1", 00:17:33.891 "superblock": true, 00:17:33.891 "num_base_bdevs": 2, 00:17:33.891 "num_base_bdevs_discovered": 2, 00:17:33.891 "num_base_bdevs_operational": 2, 00:17:33.891 "process": { 00:17:33.891 "type": "rebuild", 00:17:33.891 "target": "spare", 00:17:33.891 "progress": { 00:17:33.891 "blocks": 2560, 00:17:33.891 "percent": 32 00:17:33.891 } 00:17:33.891 }, 00:17:33.891 "base_bdevs_list": [ 00:17:33.891 { 00:17:33.891 "name": "spare", 00:17:33.891 "uuid": "4270c9a5-46ec-5fa8-b8be-6a31799cb852", 00:17:33.891 "is_configured": true, 00:17:33.891 "data_offset": 256, 00:17:33.891 "data_size": 7936 00:17:33.891 }, 00:17:33.891 { 00:17:33.891 "name": "BaseBdev2", 00:17:33.891 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:33.891 "is_configured": true, 00:17:33.891 "data_offset": 256, 00:17:33.891 "data_size": 7936 00:17:33.891 } 00:17:33.891 ] 00:17:33.891 }' 00:17:33.891 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.891 08:55:11 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:33.891 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.891 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.891 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:33.891 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.891 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.151 [2024-09-28 08:55:11.887647] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.151 [2024-09-28 08:55:11.928997] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:34.151 [2024-09-28 08:55:11.929100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.151 [2024-09-28 08:55:11.929118] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.151 [2024-09-28 08:55:11.929128] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:34.151 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.151 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:34.151 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.151 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.151 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.151 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.151 08:55:11 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:34.151 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.151 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.151 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.151 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.151 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.151 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.151 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.151 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.151 08:55:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.151 08:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.151 "name": "raid_bdev1", 00:17:34.151 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:34.151 "strip_size_kb": 0, 00:17:34.151 "state": "online", 00:17:34.151 "raid_level": "raid1", 00:17:34.151 "superblock": true, 00:17:34.151 "num_base_bdevs": 2, 00:17:34.151 "num_base_bdevs_discovered": 1, 00:17:34.151 "num_base_bdevs_operational": 1, 00:17:34.151 "base_bdevs_list": [ 00:17:34.151 { 00:17:34.151 "name": null, 00:17:34.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.151 "is_configured": false, 00:17:34.151 "data_offset": 0, 00:17:34.151 "data_size": 7936 00:17:34.151 }, 00:17:34.151 { 00:17:34.151 "name": "BaseBdev2", 00:17:34.151 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:34.151 "is_configured": true, 00:17:34.151 "data_offset": 256, 00:17:34.151 
"data_size": 7936 00:17:34.151 } 00:17:34.151 ] 00:17:34.151 }' 00:17:34.151 08:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.152 08:55:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.411 08:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:34.411 08:55:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.411 08:55:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.411 [2024-09-28 08:55:12.397521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:34.411 [2024-09-28 08:55:12.397647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.411 [2024-09-28 08:55:12.397702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:34.411 [2024-09-28 08:55:12.397735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.411 [2024-09-28 08:55:12.398282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.411 [2024-09-28 08:55:12.398344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:34.411 [2024-09-28 08:55:12.398461] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:34.411 [2024-09-28 08:55:12.398503] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:34.411 [2024-09-28 08:55:12.398542] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:34.411 [2024-09-28 08:55:12.398604] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.670 [2024-09-28 08:55:12.414670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:34.670 spare 00:17:34.670 08:55:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.670 [2024-09-28 08:55:12.416897] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:34.670 08:55:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.608 "name": "raid_bdev1", 00:17:35.608 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:35.608 "strip_size_kb": 0, 00:17:35.608 
"state": "online", 00:17:35.608 "raid_level": "raid1", 00:17:35.608 "superblock": true, 00:17:35.608 "num_base_bdevs": 2, 00:17:35.608 "num_base_bdevs_discovered": 2, 00:17:35.608 "num_base_bdevs_operational": 2, 00:17:35.608 "process": { 00:17:35.608 "type": "rebuild", 00:17:35.608 "target": "spare", 00:17:35.608 "progress": { 00:17:35.608 "blocks": 2560, 00:17:35.608 "percent": 32 00:17:35.608 } 00:17:35.608 }, 00:17:35.608 "base_bdevs_list": [ 00:17:35.608 { 00:17:35.608 "name": "spare", 00:17:35.608 "uuid": "4270c9a5-46ec-5fa8-b8be-6a31799cb852", 00:17:35.608 "is_configured": true, 00:17:35.608 "data_offset": 256, 00:17:35.608 "data_size": 7936 00:17:35.608 }, 00:17:35.608 { 00:17:35.608 "name": "BaseBdev2", 00:17:35.608 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:35.608 "is_configured": true, 00:17:35.608 "data_offset": 256, 00:17:35.608 "data_size": 7936 00:17:35.608 } 00:17:35.608 ] 00:17:35.608 }' 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.608 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.608 [2024-09-28 08:55:13.583950] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:35.868 [2024-09-28 08:55:13.625479] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:35.868 [2024-09-28 08:55:13.625606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.868 [2024-09-28 08:55:13.625646] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:35.868 [2024-09-28 08:55:13.625676] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.868 08:55:13 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.868 "name": "raid_bdev1", 00:17:35.868 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:35.868 "strip_size_kb": 0, 00:17:35.868 "state": "online", 00:17:35.868 "raid_level": "raid1", 00:17:35.868 "superblock": true, 00:17:35.868 "num_base_bdevs": 2, 00:17:35.868 "num_base_bdevs_discovered": 1, 00:17:35.868 "num_base_bdevs_operational": 1, 00:17:35.868 "base_bdevs_list": [ 00:17:35.868 { 00:17:35.868 "name": null, 00:17:35.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.868 "is_configured": false, 00:17:35.868 "data_offset": 0, 00:17:35.868 "data_size": 7936 00:17:35.868 }, 00:17:35.868 { 00:17:35.868 "name": "BaseBdev2", 00:17:35.868 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:35.868 "is_configured": true, 00:17:35.868 "data_offset": 256, 00:17:35.868 "data_size": 7936 00:17:35.868 } 00:17:35.868 ] 00:17:35.868 }' 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.868 08:55:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.129 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:36.129 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.129 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:36.129 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:36.129 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.129 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.129 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.129 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.129 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.390 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.390 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.390 "name": "raid_bdev1", 00:17:36.390 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:36.390 "strip_size_kb": 0, 00:17:36.390 "state": "online", 00:17:36.390 "raid_level": "raid1", 00:17:36.390 "superblock": true, 00:17:36.390 "num_base_bdevs": 2, 00:17:36.390 "num_base_bdevs_discovered": 1, 00:17:36.390 "num_base_bdevs_operational": 1, 00:17:36.390 "base_bdevs_list": [ 00:17:36.390 { 00:17:36.390 "name": null, 00:17:36.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.390 "is_configured": false, 00:17:36.390 "data_offset": 0, 00:17:36.390 "data_size": 7936 00:17:36.390 }, 00:17:36.390 { 00:17:36.390 "name": "BaseBdev2", 00:17:36.390 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:36.390 "is_configured": true, 00:17:36.390 "data_offset": 256, 00:17:36.390 "data_size": 7936 00:17:36.390 } 00:17:36.390 ] 00:17:36.390 }' 00:17:36.390 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.390 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:36.390 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.390 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:36.390 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:36.390 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.390 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.390 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.390 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:36.390 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.390 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.390 [2024-09-28 08:55:14.263331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:36.390 [2024-09-28 08:55:14.263389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.390 [2024-09-28 08:55:14.263413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:36.390 [2024-09-28 08:55:14.263423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.390 [2024-09-28 08:55:14.263971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.390 [2024-09-28 08:55:14.263997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:36.390 [2024-09-28 08:55:14.264084] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:36.390 [2024-09-28 08:55:14.264098] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:36.390 [2024-09-28 08:55:14.264112] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:36.390 [2024-09-28 08:55:14.264123] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:36.390 BaseBdev1 00:17:36.390 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.390 08:55:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:37.328 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:37.328 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.328 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.328 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.328 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.328 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:37.328 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.328 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.328 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.328 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.328 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.328 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.328 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.328 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.328 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.587 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.587 "name": "raid_bdev1", 00:17:37.587 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:37.587 "strip_size_kb": 0, 00:17:37.587 "state": "online", 00:17:37.587 "raid_level": "raid1", 00:17:37.587 "superblock": true, 00:17:37.587 "num_base_bdevs": 2, 00:17:37.587 "num_base_bdevs_discovered": 1, 00:17:37.587 "num_base_bdevs_operational": 1, 00:17:37.587 "base_bdevs_list": [ 00:17:37.587 { 00:17:37.587 "name": null, 00:17:37.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.587 "is_configured": false, 00:17:37.587 "data_offset": 0, 00:17:37.587 "data_size": 7936 00:17:37.587 }, 00:17:37.587 { 00:17:37.587 "name": "BaseBdev2", 00:17:37.587 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:37.587 "is_configured": true, 00:17:37.587 "data_offset": 256, 00:17:37.587 "data_size": 7936 00:17:37.587 } 00:17:37.587 ] 00:17:37.587 }' 00:17:37.587 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.587 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.847 "name": "raid_bdev1", 00:17:37.847 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:37.847 "strip_size_kb": 0, 00:17:37.847 "state": "online", 00:17:37.847 "raid_level": "raid1", 00:17:37.847 "superblock": true, 00:17:37.847 "num_base_bdevs": 2, 00:17:37.847 "num_base_bdevs_discovered": 1, 00:17:37.847 "num_base_bdevs_operational": 1, 00:17:37.847 "base_bdevs_list": [ 00:17:37.847 { 00:17:37.847 "name": null, 00:17:37.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.847 "is_configured": false, 00:17:37.847 "data_offset": 0, 00:17:37.847 "data_size": 7936 00:17:37.847 }, 00:17:37.847 { 00:17:37.847 "name": "BaseBdev2", 00:17:37.847 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:37.847 "is_configured": true, 00:17:37.847 "data_offset": 256, 00:17:37.847 "data_size": 7936 00:17:37.847 } 00:17:37.847 ] 00:17:37.847 }' 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.847 [2024-09-28 08:55:15.820788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:37.847 [2024-09-28 08:55:15.821020] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:37.847 [2024-09-28 08:55:15.821088] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:37.847 request: 00:17:37.847 { 00:17:37.847 "base_bdev": "BaseBdev1", 00:17:37.847 "raid_bdev": "raid_bdev1", 00:17:37.847 "method": "bdev_raid_add_base_bdev", 00:17:37.847 "req_id": 1 00:17:37.847 } 00:17:37.847 Got JSON-RPC error response 00:17:37.847 response: 00:17:37.847 { 00:17:37.847 "code": -22, 00:17:37.847 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:37.847 } 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:37.847 08:55:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.225 "name": "raid_bdev1", 00:17:39.225 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:39.225 "strip_size_kb": 0, 00:17:39.225 "state": "online", 00:17:39.225 "raid_level": "raid1", 00:17:39.225 "superblock": true, 00:17:39.225 "num_base_bdevs": 2, 00:17:39.225 "num_base_bdevs_discovered": 1, 00:17:39.225 "num_base_bdevs_operational": 1, 00:17:39.225 "base_bdevs_list": [ 00:17:39.225 { 00:17:39.225 "name": null, 00:17:39.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.225 "is_configured": false, 00:17:39.225 "data_offset": 0, 00:17:39.225 "data_size": 7936 00:17:39.225 }, 00:17:39.225 { 00:17:39.225 "name": "BaseBdev2", 00:17:39.225 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:39.225 "is_configured": true, 00:17:39.225 "data_offset": 256, 00:17:39.225 "data_size": 7936 00:17:39.225 } 00:17:39.225 ] 00:17:39.225 }' 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.225 08:55:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.485 08:55:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.485 "name": "raid_bdev1", 00:17:39.485 "uuid": "6b60c067-32ec-463c-baeb-f802713f5df4", 00:17:39.485 "strip_size_kb": 0, 00:17:39.485 "state": "online", 00:17:39.485 "raid_level": "raid1", 00:17:39.485 "superblock": true, 00:17:39.485 "num_base_bdevs": 2, 00:17:39.485 "num_base_bdevs_discovered": 1, 00:17:39.485 "num_base_bdevs_operational": 1, 00:17:39.485 "base_bdevs_list": [ 00:17:39.485 { 00:17:39.485 "name": null, 00:17:39.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.485 "is_configured": false, 00:17:39.485 "data_offset": 0, 00:17:39.485 "data_size": 7936 00:17:39.485 }, 00:17:39.485 { 00:17:39.485 "name": "BaseBdev2", 00:17:39.485 "uuid": "53a2eb0d-5b7a-5279-8c78-63df3065a469", 00:17:39.485 "is_configured": true, 00:17:39.485 "data_offset": 256, 00:17:39.485 "data_size": 7936 00:17:39.485 } 00:17:39.485 ] 00:17:39.485 }' 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:39.485 08:55:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86534 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 86534 ']' 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 86534 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86534 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86534' 00:17:39.485 killing process with pid 86534 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 86534 00:17:39.485 Received shutdown signal, test time was about 60.000000 seconds 00:17:39.485 00:17:39.485 Latency(us) 00:17:39.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.485 =================================================================================================================== 00:17:39.485 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:39.485 08:55:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 86534 00:17:39.485 [2024-09-28 08:55:17.459905] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:39.485 [2024-09-28 08:55:17.460041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.485 [2024-09-28 08:55:17.460100] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:17:39.485 [2024-09-28 08:55:17.460112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:40.053 [2024-09-28 08:55:17.770045] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:41.435 08:55:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:41.435 ************************************ 00:17:41.435 END TEST raid_rebuild_test_sb_4k 00:17:41.435 ************************************ 00:17:41.435 00:17:41.435 real 0m20.036s 00:17:41.435 user 0m25.986s 00:17:41.435 sys 0m2.767s 00:17:41.435 08:55:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:41.435 08:55:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.435 08:55:19 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:41.435 08:55:19 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:41.435 08:55:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:41.435 08:55:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:41.435 08:55:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:41.435 ************************************ 00:17:41.435 START TEST raid_state_function_test_sb_md_separate 00:17:41.435 ************************************ 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # 
local superblock=true 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 
00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:41.435 Process raid pid: 87221 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87221 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87221' 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87221 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87221 ']' 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:41.435 08:55:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.435 [2024-09-28 08:55:19.267653] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:41.435 [2024-09-28 08:55:19.267861] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.694 [2024-09-28 08:55:19.437100] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.694 [2024-09-28 08:55:19.681435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.954 [2024-09-28 08:55:19.909990] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.954 [2024-09-28 08:55:19.910102] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.214 [2024-09-28 08:55:20.081003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:42.214 [2024-09-28 08:55:20.081061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:42.214 [2024-09-28 08:55:20.081071] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.214 [2024-09-28 08:55:20.081080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.214 "name": "Existed_Raid", 00:17:42.214 "uuid": "7f8db1a5-834b-4322-abd6-a58e4c4b983d", 00:17:42.214 "strip_size_kb": 0, 00:17:42.214 "state": "configuring", 00:17:42.214 "raid_level": "raid1", 00:17:42.214 "superblock": true, 00:17:42.214 "num_base_bdevs": 2, 00:17:42.214 "num_base_bdevs_discovered": 0, 00:17:42.214 "num_base_bdevs_operational": 2, 00:17:42.214 "base_bdevs_list": [ 00:17:42.214 { 00:17:42.214 "name": "BaseBdev1", 00:17:42.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.214 "is_configured": false, 00:17:42.214 "data_offset": 0, 00:17:42.214 "data_size": 0 00:17:42.214 }, 00:17:42.214 { 00:17:42.214 "name": "BaseBdev2", 00:17:42.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.214 "is_configured": false, 00:17:42.214 "data_offset": 0, 00:17:42.214 "data_size": 0 00:17:42.214 } 00:17:42.214 ] 00:17:42.214 }' 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.214 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.783 
[2024-09-28 08:55:20.576037] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:42.783 [2024-09-28 08:55:20.576129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.783 [2024-09-28 08:55:20.588051] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:42.783 [2024-09-28 08:55:20.588125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:42.783 [2024-09-28 08:55:20.588151] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.783 [2024-09-28 08:55:20.588175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.783 [2024-09-28 08:55:20.678716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.783 
BaseBdev1 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.783 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.783 [ 00:17:42.783 { 00:17:42.783 "name": "BaseBdev1", 00:17:42.783 "aliases": [ 00:17:42.783 "1bdb0703-2443-4b5e-9360-9db51102b841" 00:17:42.783 ], 00:17:42.783 "product_name": "Malloc disk", 
00:17:42.783 "block_size": 4096, 00:17:42.783 "num_blocks": 8192, 00:17:42.783 "uuid": "1bdb0703-2443-4b5e-9360-9db51102b841", 00:17:42.783 "md_size": 32, 00:17:42.784 "md_interleave": false, 00:17:42.784 "dif_type": 0, 00:17:42.784 "assigned_rate_limits": { 00:17:42.784 "rw_ios_per_sec": 0, 00:17:42.784 "rw_mbytes_per_sec": 0, 00:17:42.784 "r_mbytes_per_sec": 0, 00:17:42.784 "w_mbytes_per_sec": 0 00:17:42.784 }, 00:17:42.784 "claimed": true, 00:17:42.784 "claim_type": "exclusive_write", 00:17:42.784 "zoned": false, 00:17:42.784 "supported_io_types": { 00:17:42.784 "read": true, 00:17:42.784 "write": true, 00:17:42.784 "unmap": true, 00:17:42.784 "flush": true, 00:17:42.784 "reset": true, 00:17:42.784 "nvme_admin": false, 00:17:42.784 "nvme_io": false, 00:17:42.784 "nvme_io_md": false, 00:17:42.784 "write_zeroes": true, 00:17:42.784 "zcopy": true, 00:17:42.784 "get_zone_info": false, 00:17:42.784 "zone_management": false, 00:17:42.784 "zone_append": false, 00:17:42.784 "compare": false, 00:17:42.784 "compare_and_write": false, 00:17:42.784 "abort": true, 00:17:42.784 "seek_hole": false, 00:17:42.784 "seek_data": false, 00:17:42.784 "copy": true, 00:17:42.784 "nvme_iov_md": false 00:17:42.784 }, 00:17:42.784 "memory_domains": [ 00:17:42.784 { 00:17:42.784 "dma_device_id": "system", 00:17:42.784 "dma_device_type": 1 00:17:42.784 }, 00:17:42.784 { 00:17:42.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.784 "dma_device_type": 2 00:17:42.784 } 00:17:42.784 ], 00:17:42.784 "driver_specific": {} 00:17:42.784 } 00:17:42.784 ] 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:42.784 08:55:20 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.784 "name": "Existed_Raid", 00:17:42.784 "uuid": "ab0fab4e-9e0f-4229-b52d-96fd468a74c2", 
00:17:42.784 "strip_size_kb": 0, 00:17:42.784 "state": "configuring", 00:17:42.784 "raid_level": "raid1", 00:17:42.784 "superblock": true, 00:17:42.784 "num_base_bdevs": 2, 00:17:42.784 "num_base_bdevs_discovered": 1, 00:17:42.784 "num_base_bdevs_operational": 2, 00:17:42.784 "base_bdevs_list": [ 00:17:42.784 { 00:17:42.784 "name": "BaseBdev1", 00:17:42.784 "uuid": "1bdb0703-2443-4b5e-9360-9db51102b841", 00:17:42.784 "is_configured": true, 00:17:42.784 "data_offset": 256, 00:17:42.784 "data_size": 7936 00:17:42.784 }, 00:17:42.784 { 00:17:42.784 "name": "BaseBdev2", 00:17:42.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.784 "is_configured": false, 00:17:42.784 "data_offset": 0, 00:17:42.784 "data_size": 0 00:17:42.784 } 00:17:42.784 ] 00:17:42.784 }' 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.784 08:55:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.352 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:43.352 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.353 [2024-09-28 08:55:21.141937] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:43.353 [2024-09-28 08:55:21.141977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:43.353 08:55:21 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.353 [2024-09-28 08:55:21.149992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:43.353 [2024-09-28 08:55:21.151960] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:43.353 [2024-09-28 08:55:21.151997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.353 "name": "Existed_Raid", 00:17:43.353 "uuid": "b5bf2de3-ca92-43de-a0c3-bb07f950b9d1", 00:17:43.353 "strip_size_kb": 0, 00:17:43.353 "state": "configuring", 00:17:43.353 "raid_level": "raid1", 00:17:43.353 "superblock": true, 00:17:43.353 "num_base_bdevs": 2, 00:17:43.353 "num_base_bdevs_discovered": 1, 00:17:43.353 "num_base_bdevs_operational": 2, 00:17:43.353 "base_bdevs_list": [ 00:17:43.353 { 00:17:43.353 "name": "BaseBdev1", 00:17:43.353 "uuid": "1bdb0703-2443-4b5e-9360-9db51102b841", 00:17:43.353 "is_configured": true, 00:17:43.353 "data_offset": 256, 00:17:43.353 "data_size": 7936 00:17:43.353 }, 00:17:43.353 { 00:17:43.353 "name": "BaseBdev2", 00:17:43.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.353 "is_configured": false, 00:17:43.353 "data_offset": 0, 00:17:43.353 "data_size": 0 00:17:43.353 } 00:17:43.353 ] 00:17:43.353 }' 00:17:43.353 08:55:21 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.353 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.613 [2024-09-28 08:55:21.582774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:43.613 [2024-09-28 08:55:21.583075] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:43.613 [2024-09-28 08:55:21.583128] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:43.613 [2024-09-28 08:55:21.583250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:43.613 [2024-09-28 08:55:21.583415] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:43.613 [2024-09-28 08:55:21.583454] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:43.613 [2024-09-28 08:55:21.583616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.613 BaseBdev2 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.613 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.872 [ 00:17:43.872 { 00:17:43.872 "name": "BaseBdev2", 00:17:43.872 "aliases": [ 00:17:43.872 "f4537b64-c4d1-45f1-b997-efb743282af7" 00:17:43.872 ], 00:17:43.872 "product_name": "Malloc disk", 00:17:43.872 "block_size": 4096, 00:17:43.872 "num_blocks": 8192, 00:17:43.872 "uuid": "f4537b64-c4d1-45f1-b997-efb743282af7", 00:17:43.872 "md_size": 32, 00:17:43.872 "md_interleave": false, 00:17:43.872 "dif_type": 0, 00:17:43.872 "assigned_rate_limits": { 00:17:43.872 "rw_ios_per_sec": 0, 00:17:43.872 "rw_mbytes_per_sec": 0, 00:17:43.872 "r_mbytes_per_sec": 0, 00:17:43.872 "w_mbytes_per_sec": 0 00:17:43.872 }, 00:17:43.872 "claimed": true, 00:17:43.872 "claim_type": 
"exclusive_write", 00:17:43.872 "zoned": false, 00:17:43.872 "supported_io_types": { 00:17:43.872 "read": true, 00:17:43.872 "write": true, 00:17:43.872 "unmap": true, 00:17:43.872 "flush": true, 00:17:43.872 "reset": true, 00:17:43.872 "nvme_admin": false, 00:17:43.872 "nvme_io": false, 00:17:43.872 "nvme_io_md": false, 00:17:43.872 "write_zeroes": true, 00:17:43.872 "zcopy": true, 00:17:43.872 "get_zone_info": false, 00:17:43.872 "zone_management": false, 00:17:43.872 "zone_append": false, 00:17:43.872 "compare": false, 00:17:43.872 "compare_and_write": false, 00:17:43.872 "abort": true, 00:17:43.872 "seek_hole": false, 00:17:43.872 "seek_data": false, 00:17:43.872 "copy": true, 00:17:43.872 "nvme_iov_md": false 00:17:43.872 }, 00:17:43.872 "memory_domains": [ 00:17:43.872 { 00:17:43.872 "dma_device_id": "system", 00:17:43.873 "dma_device_type": 1 00:17:43.873 }, 00:17:43.873 { 00:17:43.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.873 "dma_device_type": 2 00:17:43.873 } 00:17:43.873 ], 00:17:43.873 "driver_specific": {} 00:17:43.873 } 00:17:43.873 ] 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.873 
08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.873 "name": "Existed_Raid", 00:17:43.873 "uuid": "b5bf2de3-ca92-43de-a0c3-bb07f950b9d1", 00:17:43.873 "strip_size_kb": 0, 00:17:43.873 "state": "online", 00:17:43.873 "raid_level": "raid1", 00:17:43.873 "superblock": true, 00:17:43.873 "num_base_bdevs": 2, 00:17:43.873 "num_base_bdevs_discovered": 2, 00:17:43.873 "num_base_bdevs_operational": 2, 00:17:43.873 
"base_bdevs_list": [ 00:17:43.873 { 00:17:43.873 "name": "BaseBdev1", 00:17:43.873 "uuid": "1bdb0703-2443-4b5e-9360-9db51102b841", 00:17:43.873 "is_configured": true, 00:17:43.873 "data_offset": 256, 00:17:43.873 "data_size": 7936 00:17:43.873 }, 00:17:43.873 { 00:17:43.873 "name": "BaseBdev2", 00:17:43.873 "uuid": "f4537b64-c4d1-45f1-b997-efb743282af7", 00:17:43.873 "is_configured": true, 00:17:43.873 "data_offset": 256, 00:17:43.873 "data_size": 7936 00:17:43.873 } 00:17:43.873 ] 00:17:43.873 }' 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.873 08:55:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.132 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:44.132 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:44.132 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:44.132 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:44.132 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:44.132 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:44.132 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:44.132 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:44.132 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.132 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:44.132 [2024-09-28 08:55:22.110151] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:44.392 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.392 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:44.392 "name": "Existed_Raid", 00:17:44.392 "aliases": [ 00:17:44.392 "b5bf2de3-ca92-43de-a0c3-bb07f950b9d1" 00:17:44.392 ], 00:17:44.392 "product_name": "Raid Volume", 00:17:44.392 "block_size": 4096, 00:17:44.392 "num_blocks": 7936, 00:17:44.392 "uuid": "b5bf2de3-ca92-43de-a0c3-bb07f950b9d1", 00:17:44.392 "md_size": 32, 00:17:44.392 "md_interleave": false, 00:17:44.392 "dif_type": 0, 00:17:44.392 "assigned_rate_limits": { 00:17:44.392 "rw_ios_per_sec": 0, 00:17:44.392 "rw_mbytes_per_sec": 0, 00:17:44.392 "r_mbytes_per_sec": 0, 00:17:44.392 "w_mbytes_per_sec": 0 00:17:44.392 }, 00:17:44.392 "claimed": false, 00:17:44.392 "zoned": false, 00:17:44.392 "supported_io_types": { 00:17:44.392 "read": true, 00:17:44.392 "write": true, 00:17:44.392 "unmap": false, 00:17:44.392 "flush": false, 00:17:44.392 "reset": true, 00:17:44.392 "nvme_admin": false, 00:17:44.392 "nvme_io": false, 00:17:44.392 "nvme_io_md": false, 00:17:44.392 "write_zeroes": true, 00:17:44.392 "zcopy": false, 00:17:44.392 "get_zone_info": false, 00:17:44.392 "zone_management": false, 00:17:44.392 "zone_append": false, 00:17:44.392 "compare": false, 00:17:44.392 "compare_and_write": false, 00:17:44.392 "abort": false, 00:17:44.392 "seek_hole": false, 00:17:44.392 "seek_data": false, 00:17:44.392 "copy": false, 00:17:44.392 "nvme_iov_md": false 00:17:44.392 }, 00:17:44.392 "memory_domains": [ 00:17:44.392 { 00:17:44.392 "dma_device_id": "system", 00:17:44.392 "dma_device_type": 1 00:17:44.392 }, 00:17:44.392 { 00:17:44.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.392 "dma_device_type": 2 00:17:44.392 }, 00:17:44.392 { 
00:17:44.392 "dma_device_id": "system", 00:17:44.392 "dma_device_type": 1 00:17:44.392 }, 00:17:44.392 { 00:17:44.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.392 "dma_device_type": 2 00:17:44.392 } 00:17:44.392 ], 00:17:44.393 "driver_specific": { 00:17:44.393 "raid": { 00:17:44.393 "uuid": "b5bf2de3-ca92-43de-a0c3-bb07f950b9d1", 00:17:44.393 "strip_size_kb": 0, 00:17:44.393 "state": "online", 00:17:44.393 "raid_level": "raid1", 00:17:44.393 "superblock": true, 00:17:44.393 "num_base_bdevs": 2, 00:17:44.393 "num_base_bdevs_discovered": 2, 00:17:44.393 "num_base_bdevs_operational": 2, 00:17:44.393 "base_bdevs_list": [ 00:17:44.393 { 00:17:44.393 "name": "BaseBdev1", 00:17:44.393 "uuid": "1bdb0703-2443-4b5e-9360-9db51102b841", 00:17:44.393 "is_configured": true, 00:17:44.393 "data_offset": 256, 00:17:44.393 "data_size": 7936 00:17:44.393 }, 00:17:44.393 { 00:17:44.393 "name": "BaseBdev2", 00:17:44.393 "uuid": "f4537b64-c4d1-45f1-b997-efb743282af7", 00:17:44.393 "is_configured": true, 00:17:44.393 "data_offset": 256, 00:17:44.393 "data_size": 7936 00:17:44.393 } 00:17:44.393 ] 00:17:44.393 } 00:17:44.393 } 00:17:44.393 }' 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:44.393 BaseBdev2' 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.393 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.393 [2024-09-28 08:55:22.329590] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.653 "name": "Existed_Raid", 00:17:44.653 "uuid": "b5bf2de3-ca92-43de-a0c3-bb07f950b9d1", 00:17:44.653 "strip_size_kb": 0, 00:17:44.653 "state": "online", 00:17:44.653 "raid_level": "raid1", 00:17:44.653 "superblock": true, 00:17:44.653 "num_base_bdevs": 2, 00:17:44.653 "num_base_bdevs_discovered": 1, 00:17:44.653 "num_base_bdevs_operational": 1, 00:17:44.653 "base_bdevs_list": [ 00:17:44.653 { 00:17:44.653 "name": null, 00:17:44.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.653 "is_configured": false, 00:17:44.653 "data_offset": 0, 00:17:44.653 "data_size": 7936 00:17:44.653 }, 00:17:44.653 { 00:17:44.653 "name": "BaseBdev2", 00:17:44.653 "uuid": 
"f4537b64-c4d1-45f1-b997-efb743282af7", 00:17:44.653 "is_configured": true, 00:17:44.653 "data_offset": 256, 00:17:44.653 "data_size": 7936 00:17:44.653 } 00:17:44.653 ] 00:17:44.653 }' 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.653 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.913 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:44.913 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:44.913 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.913 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:44.913 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.913 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.913 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.173 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:45.173 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:45.173 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:45.173 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.173 08:55:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.173 [2024-09-28 08:55:22.927978] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:45.173 [2024-09-28 08:55:22.928092] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.173 [2024-09-28 08:55:23.033704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.173 [2024-09-28 08:55:23.033830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.173 [2024-09-28 08:55:23.033871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:45.173 08:55:23 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87221 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87221 ']' 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87221 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87221 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:45.173 killing process with pid 87221 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87221' 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87221 00:17:45.173 [2024-09-28 08:55:23.119740] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:45.173 08:55:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87221 00:17:45.173 [2024-09-28 08:55:23.136112] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:46.554 08:55:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:46.554 00:17:46.554 real 0m5.292s 00:17:46.554 user 0m7.365s 00:17:46.554 sys 0m0.985s 00:17:46.554 08:55:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:46.554 
08:55:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.554 ************************************ 00:17:46.554 END TEST raid_state_function_test_sb_md_separate 00:17:46.554 ************************************ 00:17:46.554 08:55:24 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:46.554 08:55:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:46.554 08:55:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:46.554 08:55:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:46.554 ************************************ 00:17:46.554 START TEST raid_superblock_test_md_separate 00:17:46.554 ************************************ 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87473 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87473 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87473 ']' 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:46.554 08:55:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.813 [2024-09-28 08:55:24.631682] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:46.813 [2024-09-28 08:55:24.631883] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87473 ] 00:17:46.813 [2024-09-28 08:55:24.800444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.071 [2024-09-28 08:55:25.035962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.329 [2024-09-28 08:55:25.259578] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.330 [2024-09-28 08:55:25.259708] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:47.590 08:55:25 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.590 malloc1 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.590 [2024-09-28 08:55:25.488227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:47.590 [2024-09-28 08:55:25.488364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.590 [2024-09-28 08:55:25.488411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:47.590 [2024-09-28 08:55:25.488440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.590 [2024-09-28 08:55:25.490345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.590 [2024-09-28 08:55:25.490413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:17:47.590 pt1 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.590 malloc2 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:47.590 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.590 08:55:25 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.590 [2024-09-28 08:55:25.582538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:47.590 [2024-09-28 08:55:25.582593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.590 [2024-09-28 08:55:25.582617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:47.590 [2024-09-28 08:55:25.582627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.850 [2024-09-28 08:55:25.584819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.850 [2024-09-28 08:55:25.584852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:47.850 pt2 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.850 [2024-09-28 08:55:25.594586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:47.850 [2024-09-28 08:55:25.596697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:47.850 [2024-09-28 08:55:25.596874] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:47.850 [2024-09-28 08:55:25.596887] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:47.850 [2024-09-28 08:55:25.596963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:47.850 [2024-09-28 08:55:25.597098] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:47.850 [2024-09-28 08:55:25.597116] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:47.850 [2024-09-28 08:55:25.597215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.850 08:55:25 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.850 "name": "raid_bdev1", 00:17:47.850 "uuid": "1241fc3e-f5a2-4366-9a89-e2c47bf8d568", 00:17:47.850 "strip_size_kb": 0, 00:17:47.850 "state": "online", 00:17:47.850 "raid_level": "raid1", 00:17:47.850 "superblock": true, 00:17:47.850 "num_base_bdevs": 2, 00:17:47.850 "num_base_bdevs_discovered": 2, 00:17:47.850 "num_base_bdevs_operational": 2, 00:17:47.850 "base_bdevs_list": [ 00:17:47.850 { 00:17:47.850 "name": "pt1", 00:17:47.850 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:47.850 "is_configured": true, 00:17:47.850 "data_offset": 256, 00:17:47.850 "data_size": 7936 00:17:47.850 }, 00:17:47.850 { 00:17:47.850 "name": "pt2", 00:17:47.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:47.850 "is_configured": true, 00:17:47.850 "data_offset": 256, 00:17:47.850 "data_size": 7936 00:17:47.850 } 00:17:47.850 ] 00:17:47.850 }' 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.850 08:55:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.109 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:48.109 08:55:26 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:48.109 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:48.109 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:48.109 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:48.109 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:48.109 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:48.109 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.109 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.109 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:48.109 [2024-09-28 08:55:26.049975] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.109 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.109 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:48.109 "name": "raid_bdev1", 00:17:48.109 "aliases": [ 00:17:48.109 "1241fc3e-f5a2-4366-9a89-e2c47bf8d568" 00:17:48.109 ], 00:17:48.109 "product_name": "Raid Volume", 00:17:48.109 "block_size": 4096, 00:17:48.109 "num_blocks": 7936, 00:17:48.109 "uuid": "1241fc3e-f5a2-4366-9a89-e2c47bf8d568", 00:17:48.109 "md_size": 32, 00:17:48.109 "md_interleave": false, 00:17:48.109 "dif_type": 0, 00:17:48.109 "assigned_rate_limits": { 00:17:48.109 "rw_ios_per_sec": 0, 00:17:48.109 "rw_mbytes_per_sec": 0, 00:17:48.109 "r_mbytes_per_sec": 0, 00:17:48.109 "w_mbytes_per_sec": 0 00:17:48.109 }, 00:17:48.109 "claimed": false, 00:17:48.109 "zoned": false, 
00:17:48.109 "supported_io_types": { 00:17:48.109 "read": true, 00:17:48.109 "write": true, 00:17:48.109 "unmap": false, 00:17:48.109 "flush": false, 00:17:48.109 "reset": true, 00:17:48.109 "nvme_admin": false, 00:17:48.109 "nvme_io": false, 00:17:48.109 "nvme_io_md": false, 00:17:48.109 "write_zeroes": true, 00:17:48.109 "zcopy": false, 00:17:48.109 "get_zone_info": false, 00:17:48.109 "zone_management": false, 00:17:48.109 "zone_append": false, 00:17:48.109 "compare": false, 00:17:48.109 "compare_and_write": false, 00:17:48.109 "abort": false, 00:17:48.109 "seek_hole": false, 00:17:48.109 "seek_data": false, 00:17:48.109 "copy": false, 00:17:48.109 "nvme_iov_md": false 00:17:48.109 }, 00:17:48.109 "memory_domains": [ 00:17:48.109 { 00:17:48.109 "dma_device_id": "system", 00:17:48.109 "dma_device_type": 1 00:17:48.109 }, 00:17:48.109 { 00:17:48.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.109 "dma_device_type": 2 00:17:48.109 }, 00:17:48.109 { 00:17:48.109 "dma_device_id": "system", 00:17:48.109 "dma_device_type": 1 00:17:48.109 }, 00:17:48.109 { 00:17:48.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.109 "dma_device_type": 2 00:17:48.109 } 00:17:48.109 ], 00:17:48.109 "driver_specific": { 00:17:48.109 "raid": { 00:17:48.109 "uuid": "1241fc3e-f5a2-4366-9a89-e2c47bf8d568", 00:17:48.109 "strip_size_kb": 0, 00:17:48.109 "state": "online", 00:17:48.109 "raid_level": "raid1", 00:17:48.109 "superblock": true, 00:17:48.109 "num_base_bdevs": 2, 00:17:48.109 "num_base_bdevs_discovered": 2, 00:17:48.109 "num_base_bdevs_operational": 2, 00:17:48.109 "base_bdevs_list": [ 00:17:48.109 { 00:17:48.109 "name": "pt1", 00:17:48.109 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.109 "is_configured": true, 00:17:48.109 "data_offset": 256, 00:17:48.109 "data_size": 7936 00:17:48.109 }, 00:17:48.109 { 00:17:48.109 "name": "pt2", 00:17:48.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.109 "is_configured": true, 00:17:48.109 "data_offset": 256, 
00:17:48.109 "data_size": 7936 00:17:48.109 } 00:17:48.109 ] 00:17:48.109 } 00:17:48.109 } 00:17:48.109 }' 00:17:48.109 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:48.367 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:48.367 pt2' 00:17:48.367 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.367 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:48.367 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.367 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:48.367 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.367 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.367 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.367 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.367 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:48.367 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:48.367 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.367 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:17:48.367 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.367 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.367 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.368 [2024-09-28 08:55:26.265552] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1241fc3e-f5a2-4366-9a89-e2c47bf8d568 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 1241fc3e-f5a2-4366-9a89-e2c47bf8d568 ']' 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.368 [2024-09-28 08:55:26.293289] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.368 [2024-09-28 08:55:26.293311] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.368 [2024-09-28 08:55:26.293372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.368 [2024-09-28 08:55:26.293418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.368 [2024-09-28 08:55:26.293429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.368 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:17:48.626 08:55:26 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.626 [2024-09-28 08:55:26.425095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:48.626 [2024-09-28 08:55:26.427038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:48.626 [2024-09-28 08:55:26.427142] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:48.626 [2024-09-28 08:55:26.427223] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:48.626 [2024-09-28 08:55:26.427277] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.626 [2024-09-28 08:55:26.427302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:48.626 request: 00:17:48.626 { 00:17:48.626 "name": 
"raid_bdev1", 00:17:48.626 "raid_level": "raid1", 00:17:48.626 "base_bdevs": [ 00:17:48.626 "malloc1", 00:17:48.626 "malloc2" 00:17:48.626 ], 00:17:48.626 "superblock": false, 00:17:48.626 "method": "bdev_raid_create", 00:17:48.626 "req_id": 1 00:17:48.626 } 00:17:48.626 Got JSON-RPC error response 00:17:48.626 response: 00:17:48.626 { 00:17:48.626 "code": -17, 00:17:48.626 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:48.626 } 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.626 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.626 [2024-09-28 08:55:26.488951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:48.626 [2024-09-28 08:55:26.489033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.626 [2024-09-28 08:55:26.489048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:48.626 [2024-09-28 08:55:26.489059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.626 [2024-09-28 08:55:26.491145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.626 [2024-09-28 08:55:26.491180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:48.627 [2024-09-28 08:55:26.491215] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:48.627 [2024-09-28 08:55:26.491268] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:48.627 pt1 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.627 "name": "raid_bdev1", 00:17:48.627 "uuid": "1241fc3e-f5a2-4366-9a89-e2c47bf8d568", 00:17:48.627 "strip_size_kb": 0, 00:17:48.627 "state": "configuring", 00:17:48.627 "raid_level": "raid1", 00:17:48.627 "superblock": true, 00:17:48.627 "num_base_bdevs": 2, 00:17:48.627 "num_base_bdevs_discovered": 1, 00:17:48.627 "num_base_bdevs_operational": 2, 00:17:48.627 "base_bdevs_list": [ 00:17:48.627 { 00:17:48.627 "name": "pt1", 00:17:48.627 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.627 "is_configured": true, 00:17:48.627 "data_offset": 256, 00:17:48.627 "data_size": 7936 00:17:48.627 }, 00:17:48.627 { 00:17:48.627 "name": null, 00:17:48.627 
"uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.627 "is_configured": false, 00:17:48.627 "data_offset": 256, 00:17:48.627 "data_size": 7936 00:17:48.627 } 00:17:48.627 ] 00:17:48.627 }' 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.627 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.195 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:49.195 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:49.195 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:49.195 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:49.195 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.195 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.195 [2024-09-28 08:55:26.904229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:49.195 [2024-09-28 08:55:26.904318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.195 [2024-09-28 08:55:26.904351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:49.195 [2024-09-28 08:55:26.904380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.195 [2024-09-28 08:55:26.904547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.195 [2024-09-28 08:55:26.904610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:49.195 [2024-09-28 08:55:26.904675] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:17:49.195 [2024-09-28 08:55:26.904722] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:49.195 [2024-09-28 08:55:26.904838] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:49.195 [2024-09-28 08:55:26.904874] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:49.195 [2024-09-28 08:55:26.904950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:49.195 [2024-09-28 08:55:26.905084] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:49.195 [2024-09-28 08:55:26.905116] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:49.195 [2024-09-28 08:55:26.905229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.195 pt2 00:17:49.195 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.195 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:49.195 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:49.195 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:49.195 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.195 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.195 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.195 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.195 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:49.195 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.196 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.196 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.196 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.196 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.196 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.196 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.196 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.196 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.196 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.196 "name": "raid_bdev1", 00:17:49.196 "uuid": "1241fc3e-f5a2-4366-9a89-e2c47bf8d568", 00:17:49.196 "strip_size_kb": 0, 00:17:49.196 "state": "online", 00:17:49.196 "raid_level": "raid1", 00:17:49.196 "superblock": true, 00:17:49.196 "num_base_bdevs": 2, 00:17:49.196 "num_base_bdevs_discovered": 2, 00:17:49.196 "num_base_bdevs_operational": 2, 00:17:49.196 "base_bdevs_list": [ 00:17:49.196 { 00:17:49.196 "name": "pt1", 00:17:49.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.196 "is_configured": true, 00:17:49.196 "data_offset": 256, 00:17:49.196 "data_size": 7936 00:17:49.196 }, 00:17:49.196 { 00:17:49.196 "name": "pt2", 00:17:49.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.196 "is_configured": true, 00:17:49.196 "data_offset": 256, 
00:17:49.196 "data_size": 7936 00:17:49.196 } 00:17:49.196 ] 00:17:49.196 }' 00:17:49.196 08:55:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.196 08:55:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.454 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:49.454 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:49.454 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:49.454 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:49.454 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:49.454 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:49.454 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.454 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:49.454 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.454 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.454 [2024-09-28 08:55:27.383693] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.454 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.454 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:49.454 "name": "raid_bdev1", 00:17:49.454 "aliases": [ 00:17:49.454 "1241fc3e-f5a2-4366-9a89-e2c47bf8d568" 00:17:49.454 ], 00:17:49.454 "product_name": 
"Raid Volume", 00:17:49.454 "block_size": 4096, 00:17:49.454 "num_blocks": 7936, 00:17:49.454 "uuid": "1241fc3e-f5a2-4366-9a89-e2c47bf8d568", 00:17:49.454 "md_size": 32, 00:17:49.454 "md_interleave": false, 00:17:49.454 "dif_type": 0, 00:17:49.454 "assigned_rate_limits": { 00:17:49.454 "rw_ios_per_sec": 0, 00:17:49.454 "rw_mbytes_per_sec": 0, 00:17:49.454 "r_mbytes_per_sec": 0, 00:17:49.454 "w_mbytes_per_sec": 0 00:17:49.454 }, 00:17:49.454 "claimed": false, 00:17:49.454 "zoned": false, 00:17:49.454 "supported_io_types": { 00:17:49.454 "read": true, 00:17:49.454 "write": true, 00:17:49.454 "unmap": false, 00:17:49.454 "flush": false, 00:17:49.454 "reset": true, 00:17:49.454 "nvme_admin": false, 00:17:49.454 "nvme_io": false, 00:17:49.454 "nvme_io_md": false, 00:17:49.454 "write_zeroes": true, 00:17:49.454 "zcopy": false, 00:17:49.454 "get_zone_info": false, 00:17:49.454 "zone_management": false, 00:17:49.454 "zone_append": false, 00:17:49.454 "compare": false, 00:17:49.454 "compare_and_write": false, 00:17:49.454 "abort": false, 00:17:49.454 "seek_hole": false, 00:17:49.454 "seek_data": false, 00:17:49.454 "copy": false, 00:17:49.454 "nvme_iov_md": false 00:17:49.454 }, 00:17:49.454 "memory_domains": [ 00:17:49.454 { 00:17:49.454 "dma_device_id": "system", 00:17:49.454 "dma_device_type": 1 00:17:49.454 }, 00:17:49.454 { 00:17:49.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.454 "dma_device_type": 2 00:17:49.454 }, 00:17:49.454 { 00:17:49.454 "dma_device_id": "system", 00:17:49.454 "dma_device_type": 1 00:17:49.454 }, 00:17:49.454 { 00:17:49.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.454 "dma_device_type": 2 00:17:49.454 } 00:17:49.454 ], 00:17:49.454 "driver_specific": { 00:17:49.454 "raid": { 00:17:49.454 "uuid": "1241fc3e-f5a2-4366-9a89-e2c47bf8d568", 00:17:49.454 "strip_size_kb": 0, 00:17:49.454 "state": "online", 00:17:49.454 "raid_level": "raid1", 00:17:49.454 "superblock": true, 00:17:49.454 "num_base_bdevs": 2, 00:17:49.454 
"num_base_bdevs_discovered": 2, 00:17:49.454 "num_base_bdevs_operational": 2, 00:17:49.454 "base_bdevs_list": [ 00:17:49.454 { 00:17:49.454 "name": "pt1", 00:17:49.454 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.454 "is_configured": true, 00:17:49.454 "data_offset": 256, 00:17:49.454 "data_size": 7936 00:17:49.454 }, 00:17:49.454 { 00:17:49.454 "name": "pt2", 00:17:49.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.454 "is_configured": true, 00:17:49.454 "data_offset": 256, 00:17:49.454 "data_size": 7936 00:17:49.454 } 00:17:49.454 ] 00:17:49.454 } 00:17:49.454 } 00:17:49.454 }' 00:17:49.454 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:49.716 pt2' 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.716 
08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:49.716 [2024-09-28 08:55:27.631214] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 1241fc3e-f5a2-4366-9a89-e2c47bf8d568 '!=' 1241fc3e-f5a2-4366-9a89-e2c47bf8d568 ']' 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.716 [2024-09-28 08:55:27.678975] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.716 08:55:27 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.716 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.976 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.976 "name": "raid_bdev1", 00:17:49.976 "uuid": "1241fc3e-f5a2-4366-9a89-e2c47bf8d568", 00:17:49.976 "strip_size_kb": 0, 00:17:49.976 "state": "online", 00:17:49.976 "raid_level": "raid1", 00:17:49.976 "superblock": true, 00:17:49.976 "num_base_bdevs": 2, 00:17:49.976 "num_base_bdevs_discovered": 1, 00:17:49.976 "num_base_bdevs_operational": 1, 00:17:49.976 "base_bdevs_list": [ 00:17:49.976 { 00:17:49.976 "name": null, 00:17:49.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.976 "is_configured": false, 00:17:49.976 "data_offset": 0, 00:17:49.976 "data_size": 7936 00:17:49.976 }, 00:17:49.976 { 00:17:49.976 "name": "pt2", 00:17:49.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.976 "is_configured": true, 00:17:49.976 "data_offset": 256, 00:17:49.976 "data_size": 7936 00:17:49.976 } 00:17:49.976 ] 00:17:49.976 }' 00:17:49.976 08:55:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:49.976 08:55:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.236 [2024-09-28 08:55:28.130160] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.236 [2024-09-28 08:55:28.130224] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.236 [2024-09-28 08:55:28.130290] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.236 [2024-09-28 08:55:28.130336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.236 [2024-09-28 08:55:28.130367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:50.236 08:55:28 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.236 [2024-09-28 08:55:28.206030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:50.236 [2024-09-28 08:55:28.206074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.236 
[2024-09-28 08:55:28.206087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:50.236 [2024-09-28 08:55:28.206096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.236 [2024-09-28 08:55:28.208221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.236 [2024-09-28 08:55:28.208258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:50.236 [2024-09-28 08:55:28.208294] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:50.236 [2024-09-28 08:55:28.208335] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.236 [2024-09-28 08:55:28.208399] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:50.236 [2024-09-28 08:55:28.208409] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:50.236 [2024-09-28 08:55:28.208468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:50.236 [2024-09-28 08:55:28.208562] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:50.236 [2024-09-28 08:55:28.208568] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:50.236 [2024-09-28 08:55:28.208646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.236 pt2 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.236 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.495 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.495 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.495 "name": "raid_bdev1", 00:17:50.495 "uuid": "1241fc3e-f5a2-4366-9a89-e2c47bf8d568", 00:17:50.495 "strip_size_kb": 0, 00:17:50.495 "state": "online", 00:17:50.495 "raid_level": "raid1", 00:17:50.495 "superblock": true, 00:17:50.495 "num_base_bdevs": 2, 00:17:50.495 "num_base_bdevs_discovered": 1, 00:17:50.495 "num_base_bdevs_operational": 1, 00:17:50.495 "base_bdevs_list": [ 00:17:50.495 { 00:17:50.495 
"name": null, 00:17:50.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.495 "is_configured": false, 00:17:50.495 "data_offset": 256, 00:17:50.495 "data_size": 7936 00:17:50.495 }, 00:17:50.495 { 00:17:50.495 "name": "pt2", 00:17:50.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.495 "is_configured": true, 00:17:50.495 "data_offset": 256, 00:17:50.495 "data_size": 7936 00:17:50.495 } 00:17:50.495 ] 00:17:50.495 }' 00:17:50.495 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.495 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.754 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:50.754 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.754 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.754 [2024-09-28 08:55:28.645245] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.754 [2024-09-28 08:55:28.645309] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.754 [2024-09-28 08:55:28.645366] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.754 [2024-09-28 08:55:28.645412] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.754 [2024-09-28 08:55:28.645441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:50.754 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.754 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.754 08:55:28 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.754 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.754 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:50.754 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.754 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:50.754 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:50.754 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.755 [2024-09-28 08:55:28.709166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:50.755 [2024-09-28 08:55:28.709244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.755 [2024-09-28 08:55:28.709272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:50.755 [2024-09-28 08:55:28.709294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.755 [2024-09-28 08:55:28.711360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.755 [2024-09-28 08:55:28.711422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:50.755 [2024-09-28 08:55:28.711488] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:17:50.755 [2024-09-28 08:55:28.711534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:50.755 [2024-09-28 08:55:28.711664] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:50.755 [2024-09-28 08:55:28.711711] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.755 [2024-09-28 08:55:28.711744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:50.755 [2024-09-28 08:55:28.711867] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.755 [2024-09-28 08:55:28.711951] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:50.755 [2024-09-28 08:55:28.711985] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:50.755 [2024-09-28 08:55:28.712058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:50.755 [2024-09-28 08:55:28.712175] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:50.755 [2024-09-28 08:55:28.712210] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:50.755 [2024-09-28 08:55:28.712330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.755 pt1 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.755 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.014 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.014 "name": "raid_bdev1", 00:17:51.014 "uuid": "1241fc3e-f5a2-4366-9a89-e2c47bf8d568", 00:17:51.014 "strip_size_kb": 0, 00:17:51.014 "state": "online", 00:17:51.014 "raid_level": "raid1", 00:17:51.014 "superblock": true, 00:17:51.014 "num_base_bdevs": 2, 00:17:51.014 "num_base_bdevs_discovered": 1, 00:17:51.014 
"num_base_bdevs_operational": 1, 00:17:51.014 "base_bdevs_list": [ 00:17:51.014 { 00:17:51.014 "name": null, 00:17:51.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.014 "is_configured": false, 00:17:51.014 "data_offset": 256, 00:17:51.014 "data_size": 7936 00:17:51.014 }, 00:17:51.014 { 00:17:51.014 "name": "pt2", 00:17:51.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.014 "is_configured": true, 00:17:51.014 "data_offset": 256, 00:17:51.014 "data_size": 7936 00:17:51.014 } 00:17:51.014 ] 00:17:51.014 }' 00:17:51.014 08:55:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.014 08:55:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.273 08:55:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:51.273 08:55:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.273 08:55:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.273 08:55:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:51.273 08:55:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.273 08:55:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:51.273 08:55:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:51.273 08:55:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:51.273 08:55:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.273 08:55:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.273 [2024-09-28 
08:55:29.248398] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.273 08:55:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.532 08:55:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 1241fc3e-f5a2-4366-9a89-e2c47bf8d568 '!=' 1241fc3e-f5a2-4366-9a89-e2c47bf8d568 ']' 00:17:51.532 08:55:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87473 00:17:51.532 08:55:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87473 ']' 00:17:51.532 08:55:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 87473 00:17:51.532 08:55:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:17:51.532 08:55:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:51.532 08:55:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87473 00:17:51.532 08:55:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:51.532 08:55:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:51.532 killing process with pid 87473 00:17:51.532 08:55:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87473' 00:17:51.532 08:55:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 87473 00:17:51.532 [2024-09-28 08:55:29.308129] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:51.532 [2024-09-28 08:55:29.308185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.532 [2024-09-28 08:55:29.308213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:17:51.532 [2024-09-28 08:55:29.308226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:51.532 08:55:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 87473 00:17:51.791 [2024-09-28 08:55:29.535094] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:53.177 08:55:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:53.177 00:17:53.177 real 0m6.311s 00:17:53.177 user 0m9.284s 00:17:53.177 sys 0m1.214s 00:17:53.177 ************************************ 00:17:53.177 END TEST raid_superblock_test_md_separate 00:17:53.177 ************************************ 00:17:53.177 08:55:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:53.177 08:55:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.177 08:55:30 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:53.177 08:55:30 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:53.177 08:55:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:53.177 08:55:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:53.177 08:55:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:53.177 ************************************ 00:17:53.177 START TEST raid_rebuild_test_sb_md_separate 00:17:53.177 ************************************ 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:53.177 
08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87807 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87807 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87807 ']' 00:17:53.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:53.177 08:55:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.177 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:17:53.177 Zero copy mechanism will not be used. 00:17:53.177 [2024-09-28 08:55:31.022157] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:53.177 [2024-09-28 08:55:31.022295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87807 ] 00:17:53.435 [2024-09-28 08:55:31.185056] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.435 [2024-09-28 08:55:31.423569] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.693 [2024-09-28 08:55:31.655356] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.693 [2024-09-28 08:55:31.655400] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.952 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:53.952 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:53.952 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:53.952 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:53.952 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.952 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.952 BaseBdev1_malloc 00:17:53.952 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.952 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:53.952 08:55:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.952 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.952 [2024-09-28 08:55:31.882805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:53.952 [2024-09-28 08:55:31.882945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.952 [2024-09-28 08:55:31.882991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:53.952 [2024-09-28 08:55:31.883037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.952 [2024-09-28 08:55:31.885222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.952 [2024-09-28 08:55:31.885296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:53.952 BaseBdev1 00:17:53.952 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.952 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:53.952 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:53.952 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.952 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.211 BaseBdev2_malloc 00:17:54.211 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.211 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:54.211 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:54.211 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.211 [2024-09-28 08:55:31.985412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:54.211 [2024-09-28 08:55:31.985529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.211 [2024-09-28 08:55:31.985564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:54.211 [2024-09-28 08:55:31.985594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.211 [2024-09-28 08:55:31.987692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.211 [2024-09-28 08:55:31.987761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:54.211 BaseBdev2 00:17:54.211 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.211 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:54.211 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.211 08:55:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.211 spare_malloc 00:17:54.211 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.211 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:54.211 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.211 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.211 spare_delay 00:17:54.211 08:55:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.211 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:54.211 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.211 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.211 [2024-09-28 08:55:32.059605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:54.212 [2024-09-28 08:55:32.059672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.212 [2024-09-28 08:55:32.059693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:54.212 [2024-09-28 08:55:32.059704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.212 [2024-09-28 08:55:32.061852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.212 [2024-09-28 08:55:32.061888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:54.212 spare 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.212 [2024-09-28 08:55:32.071661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.212 [2024-09-28 08:55:32.073691] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:17:54.212 [2024-09-28 08:55:32.073874] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:54.212 [2024-09-28 08:55:32.073889] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:54.212 [2024-09-28 08:55:32.073959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:54.212 [2024-09-28 08:55:32.074074] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:54.212 [2024-09-28 08:55:32.074089] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:54.212 [2024-09-28 08:55:32.074187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.212 "name": "raid_bdev1", 00:17:54.212 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:17:54.212 "strip_size_kb": 0, 00:17:54.212 "state": "online", 00:17:54.212 "raid_level": "raid1", 00:17:54.212 "superblock": true, 00:17:54.212 "num_base_bdevs": 2, 00:17:54.212 "num_base_bdevs_discovered": 2, 00:17:54.212 "num_base_bdevs_operational": 2, 00:17:54.212 "base_bdevs_list": [ 00:17:54.212 { 00:17:54.212 "name": "BaseBdev1", 00:17:54.212 "uuid": "42714143-3ee7-5258-9141-abad12a3178a", 00:17:54.212 "is_configured": true, 00:17:54.212 "data_offset": 256, 00:17:54.212 "data_size": 7936 00:17:54.212 }, 00:17:54.212 { 00:17:54.212 "name": "BaseBdev2", 00:17:54.212 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:17:54.212 "is_configured": true, 00:17:54.212 "data_offset": 256, 00:17:54.212 "data_size": 7936 00:17:54.212 } 00:17:54.212 ] 00:17:54.212 }' 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.212 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.780 08:55:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:54.780 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:54.780 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.781 [2024-09-28 08:55:32.523138] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:54.781 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:54.781 [2024-09-28 08:55:32.746571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:54.781 /dev/nbd0 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:55.040 
08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:55.040 1+0 records in 00:17:55.040 1+0 records out 00:17:55.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428706 s, 9.6 MB/s 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:55.040 08:55:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:55.608 7936+0 records in 00:17:55.608 7936+0 records out 00:17:55.608 32505856 bytes (33 MB, 31 MiB) copied, 0.610117 s, 53.3 MB/s 00:17:55.608 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:55.608 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:55.608 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:55.608 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:55.608 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:55.608 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:55.608 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:55.867 [2024-09-28 08:55:33.642759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.867 [2024-09-28 08:55:33.670807] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.867 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.867 "name": "raid_bdev1", 00:17:55.867 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:17:55.867 "strip_size_kb": 0, 00:17:55.867 "state": "online", 00:17:55.867 "raid_level": "raid1", 00:17:55.867 "superblock": true, 00:17:55.867 "num_base_bdevs": 2, 00:17:55.867 "num_base_bdevs_discovered": 1, 00:17:55.867 "num_base_bdevs_operational": 1, 00:17:55.867 "base_bdevs_list": [ 00:17:55.867 { 00:17:55.867 "name": null, 00:17:55.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.867 "is_configured": false, 00:17:55.867 "data_offset": 0, 00:17:55.867 "data_size": 7936 00:17:55.867 }, 00:17:55.867 { 00:17:55.868 "name": "BaseBdev2", 00:17:55.868 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:17:55.868 "is_configured": true, 00:17:55.868 "data_offset": 256, 00:17:55.868 "data_size": 7936 00:17:55.868 } 00:17:55.868 ] 00:17:55.868 }' 00:17:55.868 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.868 08:55:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.126 08:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:56.126 08:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:56.126 08:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.126 [2024-09-28 08:55:34.090136] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:56.126 [2024-09-28 08:55:34.104338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:56.126 08:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.126 08:55:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:56.126 [2024-09-28 08:55:34.106332] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:57.502 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.502 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.502 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.502 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.502 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.502 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.502 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.503 08:55:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.503 "name": "raid_bdev1", 00:17:57.503 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:17:57.503 "strip_size_kb": 0, 00:17:57.503 "state": "online", 00:17:57.503 "raid_level": "raid1", 00:17:57.503 "superblock": true, 00:17:57.503 "num_base_bdevs": 2, 00:17:57.503 "num_base_bdevs_discovered": 2, 00:17:57.503 "num_base_bdevs_operational": 2, 00:17:57.503 "process": { 00:17:57.503 "type": "rebuild", 00:17:57.503 "target": "spare", 00:17:57.503 "progress": { 00:17:57.503 "blocks": 2560, 00:17:57.503 "percent": 32 00:17:57.503 } 00:17:57.503 }, 00:17:57.503 "base_bdevs_list": [ 00:17:57.503 { 00:17:57.503 "name": "spare", 00:17:57.503 "uuid": "94efe887-54a6-5c93-a690-073a46c4a64e", 00:17:57.503 "is_configured": true, 00:17:57.503 "data_offset": 256, 00:17:57.503 "data_size": 7936 00:17:57.503 }, 00:17:57.503 { 00:17:57.503 "name": "BaseBdev2", 00:17:57.503 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:17:57.503 "is_configured": true, 00:17:57.503 "data_offset": 256, 00:17:57.503 "data_size": 7936 00:17:57.503 } 00:17:57.503 ] 00:17:57.503 }' 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:57.503 [2024-09-28 08:55:35.262159] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:57.503 [2024-09-28 08:55:35.314918] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:57.503 [2024-09-28 08:55:35.314974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.503 [2024-09-28 08:55:35.314988] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:57.503 [2024-09-28 08:55:35.315014] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.503 "name": "raid_bdev1", 00:17:57.503 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:17:57.503 "strip_size_kb": 0, 00:17:57.503 "state": "online", 00:17:57.503 "raid_level": "raid1", 00:17:57.503 "superblock": true, 00:17:57.503 "num_base_bdevs": 2, 00:17:57.503 "num_base_bdevs_discovered": 1, 00:17:57.503 "num_base_bdevs_operational": 1, 00:17:57.503 "base_bdevs_list": [ 00:17:57.503 { 00:17:57.503 "name": null, 00:17:57.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.503 "is_configured": false, 00:17:57.503 "data_offset": 0, 00:17:57.503 "data_size": 7936 00:17:57.503 }, 00:17:57.503 { 00:17:57.503 "name": "BaseBdev2", 00:17:57.503 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:17:57.503 "is_configured": true, 00:17:57.503 "data_offset": 256, 00:17:57.503 "data_size": 7936 00:17:57.503 } 00:17:57.503 ] 00:17:57.503 }' 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.503 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.076 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.076 08:55:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.076 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.077 "name": "raid_bdev1", 00:17:58.077 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:17:58.077 "strip_size_kb": 0, 00:17:58.077 "state": "online", 00:17:58.077 "raid_level": "raid1", 00:17:58.077 "superblock": true, 00:17:58.077 "num_base_bdevs": 2, 00:17:58.077 "num_base_bdevs_discovered": 1, 00:17:58.077 "num_base_bdevs_operational": 1, 00:17:58.077 "base_bdevs_list": [ 00:17:58.077 { 00:17:58.077 "name": null, 00:17:58.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.077 "is_configured": false, 00:17:58.077 "data_offset": 0, 00:17:58.077 "data_size": 7936 00:17:58.077 }, 00:17:58.077 { 00:17:58.077 "name": "BaseBdev2", 00:17:58.077 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:17:58.077 "is_configured": true, 00:17:58.077 "data_offset": 256, 00:17:58.077 "data_size": 7936 
00:17:58.077 } 00:17:58.077 ] 00:17:58.077 }' 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.077 [2024-09-28 08:55:35.937980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.077 [2024-09-28 08:55:35.951608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.077 08:55:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:58.077 [2024-09-28 08:55:35.953703] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:59.053 08:55:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.053 08:55:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.053 08:55:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.053 08:55:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:17:59.053 08:55:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.053 08:55:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.053 08:55:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.053 08:55:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.053 08:55:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.053 08:55:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.053 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.053 "name": "raid_bdev1", 00:17:59.053 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:17:59.053 "strip_size_kb": 0, 00:17:59.053 "state": "online", 00:17:59.053 "raid_level": "raid1", 00:17:59.053 "superblock": true, 00:17:59.053 "num_base_bdevs": 2, 00:17:59.053 "num_base_bdevs_discovered": 2, 00:17:59.053 "num_base_bdevs_operational": 2, 00:17:59.053 "process": { 00:17:59.053 "type": "rebuild", 00:17:59.053 "target": "spare", 00:17:59.053 "progress": { 00:17:59.053 "blocks": 2560, 00:17:59.053 "percent": 32 00:17:59.053 } 00:17:59.053 }, 00:17:59.053 "base_bdevs_list": [ 00:17:59.053 { 00:17:59.053 "name": "spare", 00:17:59.053 "uuid": "94efe887-54a6-5c93-a690-073a46c4a64e", 00:17:59.053 "is_configured": true, 00:17:59.053 "data_offset": 256, 00:17:59.053 "data_size": 7936 00:17:59.053 }, 00:17:59.053 { 00:17:59.053 "name": "BaseBdev2", 00:17:59.053 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:17:59.053 "is_configured": true, 00:17:59.053 "data_offset": 256, 00:17:59.053 "data_size": 7936 00:17:59.053 } 00:17:59.053 ] 00:17:59.053 }' 00:17:59.053 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.053 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.053 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.313 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.313 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:59.313 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:59.313 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:59.313 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:59.313 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:59.313 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:59.313 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=717 00:17:59.313 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:59.313 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.313 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.313 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.313 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.313 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.313 
08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.314 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.314 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.314 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.314 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.314 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.314 "name": "raid_bdev1", 00:17:59.314 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:17:59.314 "strip_size_kb": 0, 00:17:59.314 "state": "online", 00:17:59.314 "raid_level": "raid1", 00:17:59.314 "superblock": true, 00:17:59.314 "num_base_bdevs": 2, 00:17:59.314 "num_base_bdevs_discovered": 2, 00:17:59.314 "num_base_bdevs_operational": 2, 00:17:59.314 "process": { 00:17:59.314 "type": "rebuild", 00:17:59.314 "target": "spare", 00:17:59.314 "progress": { 00:17:59.314 "blocks": 2816, 00:17:59.314 "percent": 35 00:17:59.314 } 00:17:59.314 }, 00:17:59.314 "base_bdevs_list": [ 00:17:59.314 { 00:17:59.314 "name": "spare", 00:17:59.314 "uuid": "94efe887-54a6-5c93-a690-073a46c4a64e", 00:17:59.314 "is_configured": true, 00:17:59.314 "data_offset": 256, 00:17:59.314 "data_size": 7936 00:17:59.314 }, 00:17:59.314 { 00:17:59.314 "name": "BaseBdev2", 00:17:59.314 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:17:59.314 "is_configured": true, 00:17:59.314 "data_offset": 256, 00:17:59.314 "data_size": 7936 00:17:59.314 } 00:17:59.314 ] 00:17:59.314 }' 00:17:59.314 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.314 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.314 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.314 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.314 08:55:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:00.250 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:00.250 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.250 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.250 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.250 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.250 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.250 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.250 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.250 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.250 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.509 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.509 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.509 "name": "raid_bdev1", 00:18:00.509 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:18:00.509 "strip_size_kb": 0, 00:18:00.509 
"state": "online", 00:18:00.509 "raid_level": "raid1", 00:18:00.509 "superblock": true, 00:18:00.509 "num_base_bdevs": 2, 00:18:00.509 "num_base_bdevs_discovered": 2, 00:18:00.509 "num_base_bdevs_operational": 2, 00:18:00.509 "process": { 00:18:00.509 "type": "rebuild", 00:18:00.509 "target": "spare", 00:18:00.509 "progress": { 00:18:00.509 "blocks": 5632, 00:18:00.509 "percent": 70 00:18:00.509 } 00:18:00.509 }, 00:18:00.509 "base_bdevs_list": [ 00:18:00.509 { 00:18:00.509 "name": "spare", 00:18:00.509 "uuid": "94efe887-54a6-5c93-a690-073a46c4a64e", 00:18:00.509 "is_configured": true, 00:18:00.509 "data_offset": 256, 00:18:00.509 "data_size": 7936 00:18:00.509 }, 00:18:00.509 { 00:18:00.509 "name": "BaseBdev2", 00:18:00.509 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:18:00.509 "is_configured": true, 00:18:00.509 "data_offset": 256, 00:18:00.509 "data_size": 7936 00:18:00.509 } 00:18:00.509 ] 00:18:00.509 }' 00:18:00.509 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.509 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.509 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.509 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.509 08:55:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:01.449 [2024-09-28 08:55:39.074538] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:01.449 [2024-09-28 08:55:39.074613] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:01.449 [2024-09-28 08:55:39.074730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.449 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:01.449 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.449 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.449 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.449 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.450 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.450 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.450 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.450 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.450 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.450 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.450 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.450 "name": "raid_bdev1", 00:18:01.450 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:18:01.450 "strip_size_kb": 0, 00:18:01.450 "state": "online", 00:18:01.450 "raid_level": "raid1", 00:18:01.450 "superblock": true, 00:18:01.450 "num_base_bdevs": 2, 00:18:01.450 "num_base_bdevs_discovered": 2, 00:18:01.450 "num_base_bdevs_operational": 2, 00:18:01.450 "base_bdevs_list": [ 00:18:01.450 { 00:18:01.450 "name": "spare", 00:18:01.450 "uuid": "94efe887-54a6-5c93-a690-073a46c4a64e", 00:18:01.450 "is_configured": true, 00:18:01.450 "data_offset": 256, 00:18:01.450 "data_size": 7936 
00:18:01.450 }, 00:18:01.450 { 00:18:01.450 "name": "BaseBdev2", 00:18:01.450 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:18:01.450 "is_configured": true, 00:18:01.450 "data_offset": 256, 00:18:01.450 "data_size": 7936 00:18:01.450 } 00:18:01.450 ] 00:18:01.450 }' 00:18:01.450 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.708 
08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.708 "name": "raid_bdev1", 00:18:01.708 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:18:01.708 "strip_size_kb": 0, 00:18:01.708 "state": "online", 00:18:01.708 "raid_level": "raid1", 00:18:01.708 "superblock": true, 00:18:01.708 "num_base_bdevs": 2, 00:18:01.708 "num_base_bdevs_discovered": 2, 00:18:01.708 "num_base_bdevs_operational": 2, 00:18:01.708 "base_bdevs_list": [ 00:18:01.708 { 00:18:01.708 "name": "spare", 00:18:01.708 "uuid": "94efe887-54a6-5c93-a690-073a46c4a64e", 00:18:01.708 "is_configured": true, 00:18:01.708 "data_offset": 256, 00:18:01.708 "data_size": 7936 00:18:01.708 }, 00:18:01.708 { 00:18:01.708 "name": "BaseBdev2", 00:18:01.708 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:18:01.708 "is_configured": true, 00:18:01.708 "data_offset": 256, 00:18:01.708 "data_size": 7936 00:18:01.708 } 00:18:01.708 ] 00:18:01.708 }' 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.708 08:55:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.708 "name": "raid_bdev1", 00:18:01.708 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:18:01.708 "strip_size_kb": 0, 00:18:01.708 "state": "online", 00:18:01.708 "raid_level": "raid1", 00:18:01.708 "superblock": true, 00:18:01.708 "num_base_bdevs": 2, 00:18:01.708 "num_base_bdevs_discovered": 2, 00:18:01.708 "num_base_bdevs_operational": 2, 00:18:01.708 "base_bdevs_list": [ 00:18:01.708 { 00:18:01.708 "name": "spare", 00:18:01.708 "uuid": 
"94efe887-54a6-5c93-a690-073a46c4a64e", 00:18:01.708 "is_configured": true, 00:18:01.708 "data_offset": 256, 00:18:01.708 "data_size": 7936 00:18:01.708 }, 00:18:01.708 { 00:18:01.708 "name": "BaseBdev2", 00:18:01.708 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:18:01.708 "is_configured": true, 00:18:01.708 "data_offset": 256, 00:18:01.708 "data_size": 7936 00:18:01.708 } 00:18:01.708 ] 00:18:01.708 }' 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.708 08:55:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.276 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:02.276 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.276 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.276 [2024-09-28 08:55:40.052362] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.277 [2024-09-28 08:55:40.052453] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.277 [2024-09-28 08:55:40.052564] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.277 [2024-09-28 08:55:40.052670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.277 [2024-09-28 08:55:40.052737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- 
# jq length 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:02.277 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:02.537 
/dev/nbd0 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:02.537 1+0 records in 00:18:02.537 1+0 records out 00:18:02.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051663 s, 7.9 MB/s 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.537 08:55:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:02.537 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:02.796 /dev/nbd1 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:18:02.796 1+0 records in 00:18:02.796 1+0 records out 00:18:02.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353813 s, 11.6 MB/s 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:02.796 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:02.797 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:02.797 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:02.797 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:02.797 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:02.797 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:03.056 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:03.056 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:03.056 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:03.056 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:03.056 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:03.056 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:03.056 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:03.056 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:03.056 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:03.056 08:55:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:03.316 
08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.316 [2024-09-28 08:55:41.212672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:03.316 [2024-09-28 08:55:41.212723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.316 [2024-09-28 08:55:41.212746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:03.316 [2024-09-28 08:55:41.212756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.316 [2024-09-28 08:55:41.214877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.316 [2024-09-28 08:55:41.214913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:03.316 [2024-09-28 08:55:41.214968] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:18:03.316 [2024-09-28 08:55:41.215031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.316 [2024-09-28 08:55:41.215154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:03.316 spare 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.316 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.577 [2024-09-28 08:55:41.315069] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:03.577 [2024-09-28 08:55:41.315098] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:03.577 [2024-09-28 08:55:41.315213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:03.577 [2024-09-28 08:55:41.315334] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:03.577 [2024-09-28 08:55:41.315342] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:03.577 [2024-09-28 08:55:41.315449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.577 "name": "raid_bdev1", 00:18:03.577 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:18:03.577 "strip_size_kb": 0, 00:18:03.577 "state": "online", 00:18:03.577 "raid_level": "raid1", 00:18:03.577 "superblock": true, 00:18:03.577 "num_base_bdevs": 2, 00:18:03.577 "num_base_bdevs_discovered": 2, 00:18:03.577 "num_base_bdevs_operational": 2, 00:18:03.577 "base_bdevs_list": [ 
00:18:03.577 { 00:18:03.577 "name": "spare", 00:18:03.577 "uuid": "94efe887-54a6-5c93-a690-073a46c4a64e", 00:18:03.577 "is_configured": true, 00:18:03.577 "data_offset": 256, 00:18:03.577 "data_size": 7936 00:18:03.577 }, 00:18:03.577 { 00:18:03.577 "name": "BaseBdev2", 00:18:03.577 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:18:03.577 "is_configured": true, 00:18:03.577 "data_offset": 256, 00:18:03.577 "data_size": 7936 00:18:03.577 } 00:18:03.577 ] 00:18:03.577 }' 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.577 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.836 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.836 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.836 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.836 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.836 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.836 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.836 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.836 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.836 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.836 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.836 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.836 "name": "raid_bdev1", 00:18:03.836 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:18:03.836 "strip_size_kb": 0, 00:18:03.836 "state": "online", 00:18:03.836 "raid_level": "raid1", 00:18:03.836 "superblock": true, 00:18:03.836 "num_base_bdevs": 2, 00:18:03.836 "num_base_bdevs_discovered": 2, 00:18:03.836 "num_base_bdevs_operational": 2, 00:18:03.836 "base_bdevs_list": [ 00:18:03.836 { 00:18:03.836 "name": "spare", 00:18:03.836 "uuid": "94efe887-54a6-5c93-a690-073a46c4a64e", 00:18:03.836 "is_configured": true, 00:18:03.836 "data_offset": 256, 00:18:03.836 "data_size": 7936 00:18:03.836 }, 00:18:03.836 { 00:18:03.836 "name": "BaseBdev2", 00:18:03.836 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:18:03.836 "is_configured": true, 00:18:03.836 "data_offset": 256, 00:18:03.836 "data_size": 7936 00:18:03.836 } 00:18:03.836 ] 00:18:03.836 }' 00:18:03.836 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.095 [2024-09-28 08:55:41.943626] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.095 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.096 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.096 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.096 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.096 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.096 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.096 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.096 "name": "raid_bdev1", 00:18:04.096 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:18:04.096 "strip_size_kb": 0, 00:18:04.096 "state": "online", 00:18:04.096 "raid_level": "raid1", 00:18:04.096 "superblock": true, 00:18:04.096 "num_base_bdevs": 2, 00:18:04.096 "num_base_bdevs_discovered": 1, 00:18:04.096 "num_base_bdevs_operational": 1, 00:18:04.096 "base_bdevs_list": [ 00:18:04.096 { 00:18:04.096 "name": null, 00:18:04.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.096 "is_configured": false, 00:18:04.096 "data_offset": 0, 00:18:04.096 "data_size": 7936 00:18:04.096 }, 00:18:04.096 { 00:18:04.096 "name": "BaseBdev2", 00:18:04.096 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:18:04.096 "is_configured": true, 00:18:04.096 "data_offset": 256, 00:18:04.096 "data_size": 7936 00:18:04.096 } 00:18:04.096 ] 00:18:04.096 }' 00:18:04.096 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.096 08:55:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.664 08:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:04.664 08:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:04.664 08:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.664 [2024-09-28 08:55:42.395089] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.664 [2024-09-28 08:55:42.395207] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:04.664 [2024-09-28 08:55:42.395227] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:04.664 [2024-09-28 08:55:42.395262] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.664 [2024-09-28 08:55:42.408690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:04.664 08:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.664 08:55:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:04.664 [2024-09-28 08:55:42.410696] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:05.601 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.601 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.601 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.601 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.601 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.601 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.601 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.601 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.601 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.601 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.601 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.601 "name": "raid_bdev1", 00:18:05.601 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:18:05.601 "strip_size_kb": 0, 00:18:05.601 "state": "online", 00:18:05.601 "raid_level": "raid1", 00:18:05.601 "superblock": true, 00:18:05.601 "num_base_bdevs": 2, 00:18:05.601 "num_base_bdevs_discovered": 2, 00:18:05.601 "num_base_bdevs_operational": 2, 00:18:05.601 "process": { 00:18:05.601 "type": "rebuild", 00:18:05.601 "target": "spare", 00:18:05.601 "progress": { 00:18:05.601 "blocks": 2560, 00:18:05.601 "percent": 32 00:18:05.601 } 00:18:05.601 }, 00:18:05.601 "base_bdevs_list": [ 00:18:05.601 { 00:18:05.601 "name": "spare", 00:18:05.601 "uuid": "94efe887-54a6-5c93-a690-073a46c4a64e", 00:18:05.601 "is_configured": true, 00:18:05.601 "data_offset": 256, 00:18:05.601 "data_size": 7936 00:18:05.601 }, 00:18:05.601 { 00:18:05.601 "name": "BaseBdev2", 00:18:05.601 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:18:05.601 "is_configured": true, 00:18:05.601 "data_offset": 256, 00:18:05.601 "data_size": 7936 00:18:05.602 } 00:18:05.602 ] 00:18:05.602 }' 00:18:05.602 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.602 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.602 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.602 08:55:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.602 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:05.602 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.602 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.602 [2024-09-28 08:55:43.571190] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.860 [2024-09-28 08:55:43.619037] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:05.860 [2024-09-28 08:55:43.619107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.860 [2024-09-28 08:55:43.619120] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.860 [2024-09-28 08:55:43.619130] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:05.860 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.860 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.860 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.860 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.860 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.860 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.860 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.860 08:55:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.860 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.860 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.860 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.860 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.861 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.861 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.861 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.861 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.861 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.861 "name": "raid_bdev1", 00:18:05.861 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:18:05.861 "strip_size_kb": 0, 00:18:05.861 "state": "online", 00:18:05.861 "raid_level": "raid1", 00:18:05.861 "superblock": true, 00:18:05.861 "num_base_bdevs": 2, 00:18:05.861 "num_base_bdevs_discovered": 1, 00:18:05.861 "num_base_bdevs_operational": 1, 00:18:05.861 "base_bdevs_list": [ 00:18:05.861 { 00:18:05.861 "name": null, 00:18:05.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.861 "is_configured": false, 00:18:05.861 "data_offset": 0, 00:18:05.861 "data_size": 7936 00:18:05.861 }, 00:18:05.861 { 00:18:05.861 "name": "BaseBdev2", 00:18:05.861 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:18:05.861 "is_configured": true, 00:18:05.861 "data_offset": 256, 00:18:05.861 "data_size": 7936 00:18:05.861 } 
00:18:05.861 ] 00:18:05.861 }' 00:18:05.861 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.861 08:55:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.120 08:55:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:06.120 08:55:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.120 08:55:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.120 [2024-09-28 08:55:44.082330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:06.120 [2024-09-28 08:55:44.082384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.120 [2024-09-28 08:55:44.082409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:06.120 [2024-09-28 08:55:44.082421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.120 [2024-09-28 08:55:44.082679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.120 [2024-09-28 08:55:44.082701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:06.120 [2024-09-28 08:55:44.082751] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:06.120 [2024-09-28 08:55:44.082767] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:06.120 [2024-09-28 08:55:44.082776] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:06.120 [2024-09-28 08:55:44.082803] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.120 [2024-09-28 08:55:44.095971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:06.120 spare 00:18:06.120 08:55:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.120 08:55:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:06.120 [2024-09-28 08:55:44.098049] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.499 "name": 
"raid_bdev1", 00:18:07.499 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:18:07.499 "strip_size_kb": 0, 00:18:07.499 "state": "online", 00:18:07.499 "raid_level": "raid1", 00:18:07.499 "superblock": true, 00:18:07.499 "num_base_bdevs": 2, 00:18:07.499 "num_base_bdevs_discovered": 2, 00:18:07.499 "num_base_bdevs_operational": 2, 00:18:07.499 "process": { 00:18:07.499 "type": "rebuild", 00:18:07.499 "target": "spare", 00:18:07.499 "progress": { 00:18:07.499 "blocks": 2560, 00:18:07.499 "percent": 32 00:18:07.499 } 00:18:07.499 }, 00:18:07.499 "base_bdevs_list": [ 00:18:07.499 { 00:18:07.499 "name": "spare", 00:18:07.499 "uuid": "94efe887-54a6-5c93-a690-073a46c4a64e", 00:18:07.499 "is_configured": true, 00:18:07.499 "data_offset": 256, 00:18:07.499 "data_size": 7936 00:18:07.499 }, 00:18:07.499 { 00:18:07.499 "name": "BaseBdev2", 00:18:07.499 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:18:07.499 "is_configured": true, 00:18:07.499 "data_offset": 256, 00:18:07.499 "data_size": 7936 00:18:07.499 } 00:18:07.499 ] 00:18:07.499 }' 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.499 [2024-09-28 08:55:45.230581] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:07.499 [2024-09-28 08:55:45.305731] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:07.499 [2024-09-28 08:55:45.305783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.499 [2024-09-28 08:55:45.305800] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.499 [2024-09-28 08:55:45.305807] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.499 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.500 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:07.500 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.500 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.500 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.500 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.500 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:07.500 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.500 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.500 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.500 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.500 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.500 "name": "raid_bdev1", 00:18:07.500 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:18:07.500 "strip_size_kb": 0, 00:18:07.500 "state": "online", 00:18:07.500 "raid_level": "raid1", 00:18:07.500 "superblock": true, 00:18:07.500 "num_base_bdevs": 2, 00:18:07.500 "num_base_bdevs_discovered": 1, 00:18:07.500 "num_base_bdevs_operational": 1, 00:18:07.500 "base_bdevs_list": [ 00:18:07.500 { 00:18:07.500 "name": null, 00:18:07.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.500 "is_configured": false, 00:18:07.500 "data_offset": 0, 00:18:07.500 "data_size": 7936 00:18:07.500 }, 00:18:07.500 { 00:18:07.500 "name": "BaseBdev2", 00:18:07.500 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:18:07.500 "is_configured": true, 00:18:07.500 "data_offset": 256, 00:18:07.500 "data_size": 7936 00:18:07.500 } 00:18:07.500 ] 00:18:07.500 }' 00:18:07.500 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.500 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.071 08:55:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.071 "name": "raid_bdev1", 00:18:08.071 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:18:08.071 "strip_size_kb": 0, 00:18:08.071 "state": "online", 00:18:08.071 "raid_level": "raid1", 00:18:08.071 "superblock": true, 00:18:08.071 "num_base_bdevs": 2, 00:18:08.071 "num_base_bdevs_discovered": 1, 00:18:08.071 "num_base_bdevs_operational": 1, 00:18:08.071 "base_bdevs_list": [ 00:18:08.071 { 00:18:08.071 "name": null, 00:18:08.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.071 "is_configured": false, 00:18:08.071 "data_offset": 0, 00:18:08.071 "data_size": 7936 00:18:08.071 }, 00:18:08.071 { 00:18:08.071 "name": "BaseBdev2", 00:18:08.071 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:18:08.071 "is_configured": true, 00:18:08.071 "data_offset": 256, 00:18:08.071 "data_size": 7936 00:18:08.071 } 00:18:08.071 ] 00:18:08.071 }' 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.071 [2024-09-28 08:55:45.947950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:08.071 [2024-09-28 08:55:45.948000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.071 [2024-09-28 08:55:45.948025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:08.071 [2024-09-28 08:55:45.948034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.071 [2024-09-28 08:55:45.948258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.071 [2024-09-28 08:55:45.948277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:18:08.071 [2024-09-28 08:55:45.948322] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:08.071 [2024-09-28 08:55:45.948336] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:08.071 [2024-09-28 08:55:45.948355] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:08.071 [2024-09-28 08:55:45.948364] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:08.071 BaseBdev1 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.071 08:55:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:09.007 08:55:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.007 08:55:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.007 08:55:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.007 08:55:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.007 08:55:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.007 08:55:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.007 08:55:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.007 08:55:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.007 08:55:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:09.007 08:55:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.007 08:55:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.007 08:55:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.007 08:55:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.007 08:55:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.007 08:55:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.266 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.266 "name": "raid_bdev1", 00:18:09.266 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:18:09.266 "strip_size_kb": 0, 00:18:09.266 "state": "online", 00:18:09.266 "raid_level": "raid1", 00:18:09.266 "superblock": true, 00:18:09.266 "num_base_bdevs": 2, 00:18:09.266 "num_base_bdevs_discovered": 1, 00:18:09.266 "num_base_bdevs_operational": 1, 00:18:09.266 "base_bdevs_list": [ 00:18:09.266 { 00:18:09.266 "name": null, 00:18:09.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.266 "is_configured": false, 00:18:09.266 "data_offset": 0, 00:18:09.266 "data_size": 7936 00:18:09.266 }, 00:18:09.266 { 00:18:09.266 "name": "BaseBdev2", 00:18:09.266 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:18:09.266 "is_configured": true, 00:18:09.266 "data_offset": 256, 00:18:09.266 "data_size": 7936 00:18:09.266 } 00:18:09.266 ] 00:18:09.266 }' 00:18:09.266 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.266 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.526 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:18:09.526 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.526 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.526 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.526 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.526 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.526 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.526 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.526 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.526 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.526 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.526 "name": "raid_bdev1", 00:18:09.526 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:18:09.526 "strip_size_kb": 0, 00:18:09.526 "state": "online", 00:18:09.526 "raid_level": "raid1", 00:18:09.526 "superblock": true, 00:18:09.526 "num_base_bdevs": 2, 00:18:09.526 "num_base_bdevs_discovered": 1, 00:18:09.526 "num_base_bdevs_operational": 1, 00:18:09.526 "base_bdevs_list": [ 00:18:09.526 { 00:18:09.526 "name": null, 00:18:09.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.526 "is_configured": false, 00:18:09.526 "data_offset": 0, 00:18:09.526 "data_size": 7936 00:18:09.526 }, 00:18:09.526 { 00:18:09.526 "name": "BaseBdev2", 00:18:09.526 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:18:09.526 "is_configured": 
true, 00:18:09.526 "data_offset": 256, 00:18:09.526 "data_size": 7936 00:18:09.526 } 00:18:09.526 ] 00:18:09.526 }' 00:18:09.526 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.526 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:09.526 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.786 [2024-09-28 08:55:47.541300] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.786 [2024-09-28 08:55:47.541414] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:09.786 [2024-09-28 08:55:47.541429] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:09.786 request: 00:18:09.786 { 00:18:09.786 "base_bdev": "BaseBdev1", 00:18:09.786 "raid_bdev": "raid_bdev1", 00:18:09.786 "method": "bdev_raid_add_base_bdev", 00:18:09.786 "req_id": 1 00:18:09.786 } 00:18:09.786 Got JSON-RPC error response 00:18:09.786 response: 00:18:09.786 { 00:18:09.786 "code": -22, 00:18:09.786 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:09.786 } 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:09.786 08:55:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.725 "name": "raid_bdev1", 00:18:10.725 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:18:10.725 "strip_size_kb": 0, 00:18:10.725 "state": "online", 00:18:10.725 "raid_level": "raid1", 00:18:10.725 "superblock": true, 00:18:10.725 "num_base_bdevs": 2, 00:18:10.725 "num_base_bdevs_discovered": 1, 00:18:10.725 "num_base_bdevs_operational": 1, 00:18:10.725 "base_bdevs_list": [ 00:18:10.725 { 00:18:10.725 "name": null, 00:18:10.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.725 "is_configured": false, 00:18:10.725 
"data_offset": 0, 00:18:10.725 "data_size": 7936 00:18:10.725 }, 00:18:10.725 { 00:18:10.725 "name": "BaseBdev2", 00:18:10.725 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:18:10.725 "is_configured": true, 00:18:10.725 "data_offset": 256, 00:18:10.725 "data_size": 7936 00:18:10.725 } 00:18:10.725 ] 00:18:10.725 }' 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.725 08:55:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.294 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.294 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.295 "name": "raid_bdev1", 00:18:11.295 "uuid": "deb1fd81-356b-4fac-aafb-020d86c17355", 00:18:11.295 
"strip_size_kb": 0, 00:18:11.295 "state": "online", 00:18:11.295 "raid_level": "raid1", 00:18:11.295 "superblock": true, 00:18:11.295 "num_base_bdevs": 2, 00:18:11.295 "num_base_bdevs_discovered": 1, 00:18:11.295 "num_base_bdevs_operational": 1, 00:18:11.295 "base_bdevs_list": [ 00:18:11.295 { 00:18:11.295 "name": null, 00:18:11.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.295 "is_configured": false, 00:18:11.295 "data_offset": 0, 00:18:11.295 "data_size": 7936 00:18:11.295 }, 00:18:11.295 { 00:18:11.295 "name": "BaseBdev2", 00:18:11.295 "uuid": "3fac9295-c3e4-50db-b34a-a8ce1b808ba6", 00:18:11.295 "is_configured": true, 00:18:11.295 "data_offset": 256, 00:18:11.295 "data_size": 7936 00:18:11.295 } 00:18:11.295 ] 00:18:11.295 }' 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87807 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87807 ']' 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87807 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87807 00:18:11.295 killing process with 
pid 87807 00:18:11.295 Received shutdown signal, test time was about 60.000000 seconds 00:18:11.295 00:18:11.295 Latency(us) 00:18:11.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.295 =================================================================================================================== 00:18:11.295 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87807' 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87807 00:18:11.295 [2024-09-28 08:55:49.198480] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:11.295 08:55:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87807 00:18:11.295 [2024-09-28 08:55:49.198583] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.295 [2024-09-28 08:55:49.198625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.295 [2024-09-28 08:55:49.198636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:11.555 [2024-09-28 08:55:49.526271] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:12.936 08:55:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:12.936 00:18:12.936 real 0m19.894s 00:18:12.936 user 0m25.756s 00:18:12.936 sys 0m2.663s 00:18:12.936 08:55:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:12.936 
08:55:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.936 ************************************ 00:18:12.936 END TEST raid_rebuild_test_sb_md_separate 00:18:12.936 ************************************ 00:18:12.936 08:55:50 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:12.936 08:55:50 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:12.936 08:55:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:12.936 08:55:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:12.936 08:55:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:12.936 ************************************ 00:18:12.936 START TEST raid_state_function_test_sb_md_interleaved 00:18:12.936 ************************************ 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:12.936 08:55:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # 
raid_pid=88493 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:12.936 Process raid pid: 88493 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88493' 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88493 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88493 ']' 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:12.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:12.936 08:55:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.196 [2024-09-28 08:55:50.989340] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:13.196 [2024-09-28 08:55:50.990062] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.196 [2024-09-28 08:55:51.155229] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.455 [2024-09-28 08:55:51.403812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.715 [2024-09-28 08:55:51.639137] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.715 [2024-09-28 08:55:51.639174] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.975 [2024-09-28 08:55:51.803538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:13.975 [2024-09-28 08:55:51.803597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:13.975 [2024-09-28 08:55:51.803607] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:13.975 [2024-09-28 08:55:51.803618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:13.975 08:55:51 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.975 08:55:51 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.975 "name": "Existed_Raid", 00:18:13.975 "uuid": "cf73dba8-b80e-45ab-b854-3c8620534f42", 00:18:13.975 "strip_size_kb": 0, 00:18:13.975 "state": "configuring", 00:18:13.975 "raid_level": "raid1", 00:18:13.975 "superblock": true, 00:18:13.975 "num_base_bdevs": 2, 00:18:13.975 "num_base_bdevs_discovered": 0, 00:18:13.975 "num_base_bdevs_operational": 2, 00:18:13.975 "base_bdevs_list": [ 00:18:13.975 { 00:18:13.975 "name": "BaseBdev1", 00:18:13.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.975 "is_configured": false, 00:18:13.975 "data_offset": 0, 00:18:13.975 "data_size": 0 00:18:13.975 }, 00:18:13.975 { 00:18:13.975 "name": "BaseBdev2", 00:18:13.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.975 "is_configured": false, 00:18:13.975 "data_offset": 0, 00:18:13.975 "data_size": 0 00:18:13.975 } 00:18:13.975 ] 00:18:13.975 }' 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.975 08:55:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.545 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:14.545 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.545 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.545 [2024-09-28 08:55:52.274590] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:14.545 [2024-09-28 08:55:52.274628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:14.545 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.545 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:14.545 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.545 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.545 [2024-09-28 08:55:52.286591] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:14.545 [2024-09-28 08:55:52.286628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:14.545 [2024-09-28 08:55:52.286635] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.545 [2024-09-28 08:55:52.286647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.545 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.546 [2024-09-28 08:55:52.350925] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:14.546 BaseBdev1 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.546 [ 00:18:14.546 { 00:18:14.546 "name": "BaseBdev1", 00:18:14.546 "aliases": [ 00:18:14.546 "f0d21d96-9f7c-45fe-af2c-ff7c863eb571" 00:18:14.546 ], 00:18:14.546 "product_name": "Malloc disk", 00:18:14.546 "block_size": 4128, 00:18:14.546 "num_blocks": 8192, 00:18:14.546 "uuid": "f0d21d96-9f7c-45fe-af2c-ff7c863eb571", 00:18:14.546 "md_size": 32, 00:18:14.546 
"md_interleave": true, 00:18:14.546 "dif_type": 0, 00:18:14.546 "assigned_rate_limits": { 00:18:14.546 "rw_ios_per_sec": 0, 00:18:14.546 "rw_mbytes_per_sec": 0, 00:18:14.546 "r_mbytes_per_sec": 0, 00:18:14.546 "w_mbytes_per_sec": 0 00:18:14.546 }, 00:18:14.546 "claimed": true, 00:18:14.546 "claim_type": "exclusive_write", 00:18:14.546 "zoned": false, 00:18:14.546 "supported_io_types": { 00:18:14.546 "read": true, 00:18:14.546 "write": true, 00:18:14.546 "unmap": true, 00:18:14.546 "flush": true, 00:18:14.546 "reset": true, 00:18:14.546 "nvme_admin": false, 00:18:14.546 "nvme_io": false, 00:18:14.546 "nvme_io_md": false, 00:18:14.546 "write_zeroes": true, 00:18:14.546 "zcopy": true, 00:18:14.546 "get_zone_info": false, 00:18:14.546 "zone_management": false, 00:18:14.546 "zone_append": false, 00:18:14.546 "compare": false, 00:18:14.546 "compare_and_write": false, 00:18:14.546 "abort": true, 00:18:14.546 "seek_hole": false, 00:18:14.546 "seek_data": false, 00:18:14.546 "copy": true, 00:18:14.546 "nvme_iov_md": false 00:18:14.546 }, 00:18:14.546 "memory_domains": [ 00:18:14.546 { 00:18:14.546 "dma_device_id": "system", 00:18:14.546 "dma_device_type": 1 00:18:14.546 }, 00:18:14.546 { 00:18:14.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.546 "dma_device_type": 2 00:18:14.546 } 00:18:14.546 ], 00:18:14.546 "driver_specific": {} 00:18:14.546 } 00:18:14.546 ] 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.546 08:55:52 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.546 "name": "Existed_Raid", 00:18:14.546 "uuid": "84936a61-f819-4c20-a002-6526bd7702b2", 00:18:14.546 "strip_size_kb": 0, 00:18:14.546 "state": "configuring", 00:18:14.546 "raid_level": "raid1", 
00:18:14.546 "superblock": true, 00:18:14.546 "num_base_bdevs": 2, 00:18:14.546 "num_base_bdevs_discovered": 1, 00:18:14.546 "num_base_bdevs_operational": 2, 00:18:14.546 "base_bdevs_list": [ 00:18:14.546 { 00:18:14.546 "name": "BaseBdev1", 00:18:14.546 "uuid": "f0d21d96-9f7c-45fe-af2c-ff7c863eb571", 00:18:14.546 "is_configured": true, 00:18:14.546 "data_offset": 256, 00:18:14.546 "data_size": 7936 00:18:14.546 }, 00:18:14.546 { 00:18:14.546 "name": "BaseBdev2", 00:18:14.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.546 "is_configured": false, 00:18:14.546 "data_offset": 0, 00:18:14.546 "data_size": 0 00:18:14.546 } 00:18:14.546 ] 00:18:14.546 }' 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.546 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.806 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:14.806 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.806 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.806 [2024-09-28 08:55:52.790250] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:14.806 [2024-09-28 08:55:52.790291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:14.806 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.806 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:14.806 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:14.806 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.806 [2024-09-28 08:55:52.798328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.066 [2024-09-28 08:55:52.800410] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:15.066 [2024-09-28 08:55:52.800468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.066 
08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.066 "name": "Existed_Raid", 00:18:15.066 "uuid": "b1fef276-dbad-415b-a3cc-29a874c70d7f", 00:18:15.066 "strip_size_kb": 0, 00:18:15.066 "state": "configuring", 00:18:15.066 "raid_level": "raid1", 00:18:15.066 "superblock": true, 00:18:15.066 "num_base_bdevs": 2, 00:18:15.066 "num_base_bdevs_discovered": 1, 00:18:15.066 "num_base_bdevs_operational": 2, 00:18:15.066 "base_bdevs_list": [ 00:18:15.066 { 00:18:15.066 "name": "BaseBdev1", 00:18:15.066 "uuid": "f0d21d96-9f7c-45fe-af2c-ff7c863eb571", 00:18:15.066 "is_configured": true, 00:18:15.066 "data_offset": 256, 00:18:15.066 "data_size": 7936 00:18:15.066 }, 00:18:15.066 { 00:18:15.066 "name": "BaseBdev2", 00:18:15.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.066 "is_configured": false, 00:18:15.066 "data_offset": 0, 00:18:15.066 "data_size": 0 00:18:15.066 } 00:18:15.066 ] 00:18:15.066 }' 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:15.066 08:55:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.326 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:15.326 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.326 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.326 [2024-09-28 08:55:53.304861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:15.326 [2024-09-28 08:55:53.305067] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:15.326 [2024-09-28 08:55:53.305081] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:15.326 [2024-09-28 08:55:53.305184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:15.327 [2024-09-28 08:55:53.305263] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:15.327 [2024-09-28 08:55:53.305278] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:15.327 [2024-09-28 08:55:53.305339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.327 BaseBdev2 00:18:15.327 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.327 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:15.327 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:15.327 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:18:15.327 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:18:15.327 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:15.327 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:15.327 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:15.327 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.327 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.327 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.327 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:15.327 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.327 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.586 [ 00:18:15.586 { 00:18:15.586 "name": "BaseBdev2", 00:18:15.586 "aliases": [ 00:18:15.586 "15507b64-1d85-4218-8aa4-743a0a538d46" 00:18:15.586 ], 00:18:15.586 "product_name": "Malloc disk", 00:18:15.586 "block_size": 4128, 00:18:15.586 "num_blocks": 8192, 00:18:15.586 "uuid": "15507b64-1d85-4218-8aa4-743a0a538d46", 00:18:15.586 "md_size": 32, 00:18:15.586 "md_interleave": true, 00:18:15.586 "dif_type": 0, 00:18:15.586 "assigned_rate_limits": { 00:18:15.586 "rw_ios_per_sec": 0, 00:18:15.586 "rw_mbytes_per_sec": 0, 00:18:15.586 "r_mbytes_per_sec": 0, 00:18:15.586 "w_mbytes_per_sec": 0 00:18:15.586 }, 00:18:15.586 "claimed": true, 00:18:15.586 "claim_type": "exclusive_write", 
00:18:15.586 "zoned": false, 00:18:15.586 "supported_io_types": { 00:18:15.586 "read": true, 00:18:15.586 "write": true, 00:18:15.586 "unmap": true, 00:18:15.586 "flush": true, 00:18:15.586 "reset": true, 00:18:15.586 "nvme_admin": false, 00:18:15.586 "nvme_io": false, 00:18:15.586 "nvme_io_md": false, 00:18:15.586 "write_zeroes": true, 00:18:15.586 "zcopy": true, 00:18:15.586 "get_zone_info": false, 00:18:15.586 "zone_management": false, 00:18:15.586 "zone_append": false, 00:18:15.586 "compare": false, 00:18:15.586 "compare_and_write": false, 00:18:15.586 "abort": true, 00:18:15.586 "seek_hole": false, 00:18:15.586 "seek_data": false, 00:18:15.586 "copy": true, 00:18:15.586 "nvme_iov_md": false 00:18:15.586 }, 00:18:15.586 "memory_domains": [ 00:18:15.586 { 00:18:15.586 "dma_device_id": "system", 00:18:15.586 "dma_device_type": 1 00:18:15.586 }, 00:18:15.586 { 00:18:15.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.586 "dma_device_type": 2 00:18:15.586 } 00:18:15.586 ], 00:18:15.586 "driver_specific": {} 00:18:15.586 } 00:18:15.586 ] 00:18:15.586 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.586 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:18:15.586 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:15.586 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:15.586 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:15.586 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.586 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.586 
08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.586 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.586 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.586 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.586 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.586 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.586 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.587 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.587 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.587 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.587 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.587 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.587 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.587 "name": "Existed_Raid", 00:18:15.587 "uuid": "b1fef276-dbad-415b-a3cc-29a874c70d7f", 00:18:15.587 "strip_size_kb": 0, 00:18:15.587 "state": "online", 00:18:15.587 "raid_level": "raid1", 00:18:15.587 "superblock": true, 00:18:15.587 "num_base_bdevs": 2, 00:18:15.587 "num_base_bdevs_discovered": 2, 00:18:15.587 
"num_base_bdevs_operational": 2, 00:18:15.587 "base_bdevs_list": [ 00:18:15.587 { 00:18:15.587 "name": "BaseBdev1", 00:18:15.587 "uuid": "f0d21d96-9f7c-45fe-af2c-ff7c863eb571", 00:18:15.587 "is_configured": true, 00:18:15.587 "data_offset": 256, 00:18:15.587 "data_size": 7936 00:18:15.587 }, 00:18:15.587 { 00:18:15.587 "name": "BaseBdev2", 00:18:15.587 "uuid": "15507b64-1d85-4218-8aa4-743a0a538d46", 00:18:15.587 "is_configured": true, 00:18:15.587 "data_offset": 256, 00:18:15.587 "data_size": 7936 00:18:15.587 } 00:18:15.587 ] 00:18:15.587 }' 00:18:15.587 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.587 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.846 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:15.846 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:15.846 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:15.846 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:15.846 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:15.846 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:15.846 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:15.846 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.846 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.846 08:55:53 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:15.846 [2024-09-28 08:55:53.788328] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.846 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.846 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:15.846 "name": "Existed_Raid", 00:18:15.846 "aliases": [ 00:18:15.846 "b1fef276-dbad-415b-a3cc-29a874c70d7f" 00:18:15.846 ], 00:18:15.846 "product_name": "Raid Volume", 00:18:15.846 "block_size": 4128, 00:18:15.846 "num_blocks": 7936, 00:18:15.846 "uuid": "b1fef276-dbad-415b-a3cc-29a874c70d7f", 00:18:15.846 "md_size": 32, 00:18:15.846 "md_interleave": true, 00:18:15.846 "dif_type": 0, 00:18:15.846 "assigned_rate_limits": { 00:18:15.846 "rw_ios_per_sec": 0, 00:18:15.846 "rw_mbytes_per_sec": 0, 00:18:15.846 "r_mbytes_per_sec": 0, 00:18:15.846 "w_mbytes_per_sec": 0 00:18:15.846 }, 00:18:15.846 "claimed": false, 00:18:15.846 "zoned": false, 00:18:15.846 "supported_io_types": { 00:18:15.846 "read": true, 00:18:15.846 "write": true, 00:18:15.846 "unmap": false, 00:18:15.846 "flush": false, 00:18:15.846 "reset": true, 00:18:15.846 "nvme_admin": false, 00:18:15.846 "nvme_io": false, 00:18:15.846 "nvme_io_md": false, 00:18:15.846 "write_zeroes": true, 00:18:15.846 "zcopy": false, 00:18:15.846 "get_zone_info": false, 00:18:15.846 "zone_management": false, 00:18:15.846 "zone_append": false, 00:18:15.846 "compare": false, 00:18:15.846 "compare_and_write": false, 00:18:15.846 "abort": false, 00:18:15.846 "seek_hole": false, 00:18:15.846 "seek_data": false, 00:18:15.846 "copy": false, 00:18:15.846 "nvme_iov_md": false 00:18:15.846 }, 00:18:15.846 "memory_domains": [ 00:18:15.846 { 00:18:15.846 "dma_device_id": "system", 00:18:15.846 "dma_device_type": 1 00:18:15.846 }, 00:18:15.846 { 00:18:15.846 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:15.846 "dma_device_type": 2 00:18:15.846 }, 00:18:15.846 { 00:18:15.846 "dma_device_id": "system", 00:18:15.846 "dma_device_type": 1 00:18:15.846 }, 00:18:15.846 { 00:18:15.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.846 "dma_device_type": 2 00:18:15.846 } 00:18:15.846 ], 00:18:15.846 "driver_specific": { 00:18:15.846 "raid": { 00:18:15.846 "uuid": "b1fef276-dbad-415b-a3cc-29a874c70d7f", 00:18:15.846 "strip_size_kb": 0, 00:18:15.846 "state": "online", 00:18:15.846 "raid_level": "raid1", 00:18:15.846 "superblock": true, 00:18:15.846 "num_base_bdevs": 2, 00:18:15.846 "num_base_bdevs_discovered": 2, 00:18:15.846 "num_base_bdevs_operational": 2, 00:18:15.846 "base_bdevs_list": [ 00:18:15.846 { 00:18:15.846 "name": "BaseBdev1", 00:18:15.846 "uuid": "f0d21d96-9f7c-45fe-af2c-ff7c863eb571", 00:18:15.846 "is_configured": true, 00:18:15.846 "data_offset": 256, 00:18:15.846 "data_size": 7936 00:18:15.846 }, 00:18:15.846 { 00:18:15.847 "name": "BaseBdev2", 00:18:15.847 "uuid": "15507b64-1d85-4218-8aa4-743a0a538d46", 00:18:15.847 "is_configured": true, 00:18:15.847 "data_offset": 256, 00:18:15.847 "data_size": 7936 00:18:15.847 } 00:18:15.847 ] 00:18:15.847 } 00:18:15.847 } 00:18:15.847 }' 00:18:15.847 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:16.107 BaseBdev2' 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:16.107 
08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.107 08:55:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.107 [2024-09-28 08:55:53.995781] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:16.107 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.107 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:16.107 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:16.107 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:16.107 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:16.107 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:16.107 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:16.107 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.107 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.107 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.107 08:55:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.107 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.107 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.107 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.107 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.107 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.367 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.367 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.367 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.367 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.367 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.367 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.367 "name": "Existed_Raid", 00:18:16.367 "uuid": "b1fef276-dbad-415b-a3cc-29a874c70d7f", 00:18:16.367 "strip_size_kb": 0, 00:18:16.367 "state": "online", 00:18:16.367 "raid_level": "raid1", 00:18:16.367 "superblock": true, 00:18:16.367 "num_base_bdevs": 2, 00:18:16.367 "num_base_bdevs_discovered": 1, 00:18:16.367 "num_base_bdevs_operational": 1, 00:18:16.367 "base_bdevs_list": [ 00:18:16.367 { 00:18:16.367 "name": null, 00:18:16.367 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:16.367 "is_configured": false, 00:18:16.367 "data_offset": 0, 00:18:16.367 "data_size": 7936 00:18:16.367 }, 00:18:16.367 { 00:18:16.367 "name": "BaseBdev2", 00:18:16.367 "uuid": "15507b64-1d85-4218-8aa4-743a0a538d46", 00:18:16.367 "is_configured": true, 00:18:16.367 "data_offset": 256, 00:18:16.367 "data_size": 7936 00:18:16.367 } 00:18:16.367 ] 00:18:16.367 }' 00:18:16.367 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.367 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.627 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:16.627 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:16.627 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.627 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:16.627 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.627 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.627 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.627 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:16.627 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:16.627 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:16.627 08:55:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.627 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.627 [2024-09-28 08:55:54.613605] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:16.627 [2024-09-28 08:55:54.613790] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.886 [2024-09-28 08:55:54.714802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.886 [2024-09-28 08:55:54.714917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.886 [2024-09-28 08:55:54.714959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:16.886 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.886 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:16.886 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:16.886 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.886 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:16.886 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.886 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.886 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.886 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:16.886 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:16.886 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:16.886 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88493 00:18:16.886 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88493 ']' 00:18:16.886 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88493 00:18:16.886 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:16.886 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:16.887 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88493 00:18:16.887 killing process with pid 88493 00:18:16.887 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:16.887 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:16.887 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88493' 00:18:16.887 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 88493 00:18:16.887 [2024-09-28 08:55:54.813549] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:16.887 08:55:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 88493 00:18:16.887 [2024-09-28 08:55:54.829503] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:18.268 
08:55:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:18.268 00:18:18.268 real 0m5.251s 00:18:18.268 user 0m7.354s 00:18:18.268 sys 0m0.984s 00:18:18.268 08:55:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:18.268 ************************************ 00:18:18.268 END TEST raid_state_function_test_sb_md_interleaved 00:18:18.268 ************************************ 00:18:18.268 08:55:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.268 08:55:56 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:18.268 08:55:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:18.268 08:55:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:18.268 08:55:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:18.268 ************************************ 00:18:18.268 START TEST raid_superblock_test_md_interleaved 00:18:18.268 ************************************ 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88747 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88747 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88747 ']' 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:18.268 08:55:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.527 [2024-09-28 08:55:56.317772] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:18.527 [2024-09-28 08:55:56.317978] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88747 ] 00:18:18.527 [2024-09-28 08:55:56.482992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.786 [2024-09-28 08:55:56.726325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.045 [2024-09-28 08:55:56.953675] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.045 [2024-09-28 08:55:56.953709] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.305 malloc1 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.305 [2024-09-28 08:55:57.203700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:19.305 [2024-09-28 08:55:57.203838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.305 [2024-09-28 08:55:57.203881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:19.305 [2024-09-28 08:55:57.203910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.305 
[2024-09-28 08:55:57.205900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.305 [2024-09-28 08:55:57.205963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:19.305 pt1 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.305 malloc2 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.305 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.305 [2024-09-28 08:55:57.295837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:19.305 [2024-09-28 08:55:57.295944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.305 [2024-09-28 08:55:57.295985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:19.305 [2024-09-28 08:55:57.296014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.305 [2024-09-28 08:55:57.298148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.305 [2024-09-28 08:55:57.298228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:19.565 pt2 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.565 [2024-09-28 08:55:57.307888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:19.565 [2024-09-28 08:55:57.309919] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:19.565 [2024-09-28 08:55:57.310101] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:19.565 [2024-09-28 08:55:57.310116] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:19.565 [2024-09-28 08:55:57.310188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:19.565 [2024-09-28 08:55:57.310249] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:19.565 [2024-09-28 08:55:57.310262] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:19.565 [2024-09-28 08:55:57.310328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.565 
08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.565 "name": "raid_bdev1", 00:18:19.565 "uuid": "08f28bd0-b4b6-42e4-812c-f046486b2513", 00:18:19.565 "strip_size_kb": 0, 00:18:19.565 "state": "online", 00:18:19.565 "raid_level": "raid1", 00:18:19.565 "superblock": true, 00:18:19.565 "num_base_bdevs": 2, 00:18:19.565 "num_base_bdevs_discovered": 2, 00:18:19.565 "num_base_bdevs_operational": 2, 00:18:19.565 "base_bdevs_list": [ 00:18:19.565 { 00:18:19.565 "name": "pt1", 00:18:19.565 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:19.565 "is_configured": true, 00:18:19.565 "data_offset": 256, 00:18:19.565 "data_size": 7936 00:18:19.565 }, 00:18:19.565 { 00:18:19.565 "name": "pt2", 00:18:19.565 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.565 "is_configured": true, 00:18:19.565 "data_offset": 256, 00:18:19.565 "data_size": 7936 00:18:19.565 } 00:18:19.565 ] 00:18:19.565 }' 00:18:19.565 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.565 08:55:57 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.824 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:19.824 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:19.825 [2024-09-28 08:55:57.671503] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:19.825 "name": "raid_bdev1", 00:18:19.825 "aliases": [ 00:18:19.825 "08f28bd0-b4b6-42e4-812c-f046486b2513" 00:18:19.825 ], 00:18:19.825 "product_name": "Raid Volume", 00:18:19.825 "block_size": 4128, 00:18:19.825 "num_blocks": 7936, 00:18:19.825 "uuid": "08f28bd0-b4b6-42e4-812c-f046486b2513", 00:18:19.825 "md_size": 32, 
00:18:19.825 "md_interleave": true, 00:18:19.825 "dif_type": 0, 00:18:19.825 "assigned_rate_limits": { 00:18:19.825 "rw_ios_per_sec": 0, 00:18:19.825 "rw_mbytes_per_sec": 0, 00:18:19.825 "r_mbytes_per_sec": 0, 00:18:19.825 "w_mbytes_per_sec": 0 00:18:19.825 }, 00:18:19.825 "claimed": false, 00:18:19.825 "zoned": false, 00:18:19.825 "supported_io_types": { 00:18:19.825 "read": true, 00:18:19.825 "write": true, 00:18:19.825 "unmap": false, 00:18:19.825 "flush": false, 00:18:19.825 "reset": true, 00:18:19.825 "nvme_admin": false, 00:18:19.825 "nvme_io": false, 00:18:19.825 "nvme_io_md": false, 00:18:19.825 "write_zeroes": true, 00:18:19.825 "zcopy": false, 00:18:19.825 "get_zone_info": false, 00:18:19.825 "zone_management": false, 00:18:19.825 "zone_append": false, 00:18:19.825 "compare": false, 00:18:19.825 "compare_and_write": false, 00:18:19.825 "abort": false, 00:18:19.825 "seek_hole": false, 00:18:19.825 "seek_data": false, 00:18:19.825 "copy": false, 00:18:19.825 "nvme_iov_md": false 00:18:19.825 }, 00:18:19.825 "memory_domains": [ 00:18:19.825 { 00:18:19.825 "dma_device_id": "system", 00:18:19.825 "dma_device_type": 1 00:18:19.825 }, 00:18:19.825 { 00:18:19.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.825 "dma_device_type": 2 00:18:19.825 }, 00:18:19.825 { 00:18:19.825 "dma_device_id": "system", 00:18:19.825 "dma_device_type": 1 00:18:19.825 }, 00:18:19.825 { 00:18:19.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.825 "dma_device_type": 2 00:18:19.825 } 00:18:19.825 ], 00:18:19.825 "driver_specific": { 00:18:19.825 "raid": { 00:18:19.825 "uuid": "08f28bd0-b4b6-42e4-812c-f046486b2513", 00:18:19.825 "strip_size_kb": 0, 00:18:19.825 "state": "online", 00:18:19.825 "raid_level": "raid1", 00:18:19.825 "superblock": true, 00:18:19.825 "num_base_bdevs": 2, 00:18:19.825 "num_base_bdevs_discovered": 2, 00:18:19.825 "num_base_bdevs_operational": 2, 00:18:19.825 "base_bdevs_list": [ 00:18:19.825 { 00:18:19.825 "name": "pt1", 00:18:19.825 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:19.825 "is_configured": true, 00:18:19.825 "data_offset": 256, 00:18:19.825 "data_size": 7936 00:18:19.825 }, 00:18:19.825 { 00:18:19.825 "name": "pt2", 00:18:19.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.825 "is_configured": true, 00:18:19.825 "data_offset": 256, 00:18:19.825 "data_size": 7936 00:18:19.825 } 00:18:19.825 ] 00:18:19.825 } 00:18:19.825 } 00:18:19.825 }' 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:19.825 pt2' 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.825 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.084 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.084 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:20.084 08:55:57 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.085 [2024-09-28 08:55:57.903069] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- 
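The property checks traced above (`bdev_raid.sh@188`, `@189`, and `@192`) boil down to two `jq` filters run over `bdev_get_bdevs` output: one that lists the configured base bdev names, and one that projects the four fields being compared (`block_size`, `md_size`, `md_interleave`, `dif_type`). They can be reproduced outside the harness against a trimmed sample object; the JSON below is an illustrative excerpt, not the full RPC response:

```shell
#!/bin/sh
# Trimmed, hypothetical sample of one entry from `rpc_cmd bdev_get_bdevs -b raid_bdev1`
json='{
  "block_size": 4128, "md_size": 32, "md_interleave": true, "dif_type": 0,
  "driver_specific": { "raid": { "base_bdevs_list": [
    {"name": "pt1", "is_configured": true},
    {"name": "pt2", "is_configured": true}
  ] } }
}'

# @188: names of the configured base bdevs, one per line
names=$(echo "$json" | jq -r \
  '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
echo "$names"

# @189: the raid volume's compared fields, joined into one string
cmp_raid_bdev=$(echo "$json" | jq -r \
  '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
echo "$cmp_raid_bdev"
```

The test then fetches the same four fields from each base bdev (`@192`) and string-compares them against `cmp_raid_bdev` with a bash `[[ … == … ]]` match, which is why the trace shows the backslash-escaped pattern `\4\1\2\8\ \3\2\ \t\r\u\e\ \0`.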
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=08f28bd0-b4b6-42e4-812c-f046486b2513 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 08f28bd0-b4b6-42e4-812c-f046486b2513 ']' 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.085 [2024-09-28 08:55:57.946756] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.085 [2024-09-28 08:55:57.946814] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.085 [2024-09-28 08:55:57.946896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.085 [2024-09-28 08:55:57.946969] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.085 [2024-09-28 08:55:57.947002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:20.085 08:55:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.085 08:55:58 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:20.085 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:20.085 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:20.085 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:20.085 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.085 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.085 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.085 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:20.085 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:20.085 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.085 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.085 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.085 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:20.085 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.085 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.085 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:20.085 08:55:58 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.344 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:20.344 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:20.344 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:18:20.344 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:20.344 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:20.344 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.344 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:20.344 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.344 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:20.344 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.344 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.344 [2024-09-28 08:55:58.094618] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:20.344 [2024-09-28 08:55:58.096581] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:20.344 [2024-09-28 08:55:58.096706] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:20.344 [2024-09-28 08:55:58.096804] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:20.344 [2024-09-28 08:55:58.096865] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.344 [2024-09-28 08:55:58.096911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:20.344 request: 00:18:20.344 { 00:18:20.344 "name": "raid_bdev1", 00:18:20.344 "raid_level": "raid1", 00:18:20.344 "base_bdevs": [ 00:18:20.344 "malloc1", 00:18:20.344 "malloc2" 00:18:20.344 ], 00:18:20.344 "superblock": false, 00:18:20.344 "method": "bdev_raid_create", 00:18:20.344 "req_id": 1 00:18:20.344 } 00:18:20.344 Got JSON-RPC error response 00:18:20.344 response: 00:18:20.344 { 00:18:20.345 "code": -17, 00:18:20.345 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:20.345 } 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.345 08:55:58 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.345 [2024-09-28 08:55:58.154468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:20.345 [2024-09-28 08:55:58.154512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.345 [2024-09-28 08:55:58.154526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:20.345 [2024-09-28 08:55:58.154536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.345 [2024-09-28 08:55:58.156527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.345 [2024-09-28 08:55:58.156563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:20.345 [2024-09-28 08:55:58.156602] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:20.345 [2024-09-28 08:55:58.156677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:20.345 pt1 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.345 08:55:58 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.345 
"name": "raid_bdev1", 00:18:20.345 "uuid": "08f28bd0-b4b6-42e4-812c-f046486b2513", 00:18:20.345 "strip_size_kb": 0, 00:18:20.345 "state": "configuring", 00:18:20.345 "raid_level": "raid1", 00:18:20.345 "superblock": true, 00:18:20.345 "num_base_bdevs": 2, 00:18:20.345 "num_base_bdevs_discovered": 1, 00:18:20.345 "num_base_bdevs_operational": 2, 00:18:20.345 "base_bdevs_list": [ 00:18:20.345 { 00:18:20.345 "name": "pt1", 00:18:20.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:20.345 "is_configured": true, 00:18:20.345 "data_offset": 256, 00:18:20.345 "data_size": 7936 00:18:20.345 }, 00:18:20.345 { 00:18:20.345 "name": null, 00:18:20.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.345 "is_configured": false, 00:18:20.345 "data_offset": 256, 00:18:20.345 "data_size": 7936 00:18:20.345 } 00:18:20.345 ] 00:18:20.345 }' 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.345 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.602 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:20.603 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:20.603 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:20.603 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:20.603 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.603 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.603 [2024-09-28 08:55:58.593732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:20.603 [2024-09-28 08:55:58.593781] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.603 [2024-09-28 08:55:58.593797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:20.603 [2024-09-28 08:55:58.593807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.603 [2024-09-28 08:55:58.593915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.603 [2024-09-28 08:55:58.593937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:20.603 [2024-09-28 08:55:58.593972] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:20.603 [2024-09-28 08:55:58.593999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.603 [2024-09-28 08:55:58.594078] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:20.603 [2024-09-28 08:55:58.594104] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:20.603 [2024-09-28 08:55:58.594168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:20.603 [2024-09-28 08:55:58.594226] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:20.603 [2024-09-28 08:55:58.594233] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:20.603 [2024-09-28 08:55:58.594285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.861 pt2 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:20.861 08:55:58 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.861 "name": 
"raid_bdev1", 00:18:20.861 "uuid": "08f28bd0-b4b6-42e4-812c-f046486b2513", 00:18:20.861 "strip_size_kb": 0, 00:18:20.861 "state": "online", 00:18:20.861 "raid_level": "raid1", 00:18:20.861 "superblock": true, 00:18:20.861 "num_base_bdevs": 2, 00:18:20.861 "num_base_bdevs_discovered": 2, 00:18:20.861 "num_base_bdevs_operational": 2, 00:18:20.861 "base_bdevs_list": [ 00:18:20.861 { 00:18:20.861 "name": "pt1", 00:18:20.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:20.861 "is_configured": true, 00:18:20.861 "data_offset": 256, 00:18:20.861 "data_size": 7936 00:18:20.861 }, 00:18:20.861 { 00:18:20.861 "name": "pt2", 00:18:20.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.861 "is_configured": true, 00:18:20.861 "data_offset": 256, 00:18:20.861 "data_size": 7936 00:18:20.861 } 00:18:20.861 ] 00:18:20.861 }' 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.861 08:55:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.120 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:21.120 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:21.120 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:21.120 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:21.120 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:21.120 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:21.120 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:21.120 08:55:59 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.120 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.120 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:21.120 [2024-09-28 08:55:59.053145] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.120 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.120 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:21.120 "name": "raid_bdev1", 00:18:21.120 "aliases": [ 00:18:21.120 "08f28bd0-b4b6-42e4-812c-f046486b2513" 00:18:21.120 ], 00:18:21.120 "product_name": "Raid Volume", 00:18:21.120 "block_size": 4128, 00:18:21.120 "num_blocks": 7936, 00:18:21.121 "uuid": "08f28bd0-b4b6-42e4-812c-f046486b2513", 00:18:21.121 "md_size": 32, 00:18:21.121 "md_interleave": true, 00:18:21.121 "dif_type": 0, 00:18:21.121 "assigned_rate_limits": { 00:18:21.121 "rw_ios_per_sec": 0, 00:18:21.121 "rw_mbytes_per_sec": 0, 00:18:21.121 "r_mbytes_per_sec": 0, 00:18:21.121 "w_mbytes_per_sec": 0 00:18:21.121 }, 00:18:21.121 "claimed": false, 00:18:21.121 "zoned": false, 00:18:21.121 "supported_io_types": { 00:18:21.121 "read": true, 00:18:21.121 "write": true, 00:18:21.121 "unmap": false, 00:18:21.121 "flush": false, 00:18:21.121 "reset": true, 00:18:21.121 "nvme_admin": false, 00:18:21.121 "nvme_io": false, 00:18:21.121 "nvme_io_md": false, 00:18:21.121 "write_zeroes": true, 00:18:21.121 "zcopy": false, 00:18:21.121 "get_zone_info": false, 00:18:21.121 "zone_management": false, 00:18:21.121 "zone_append": false, 00:18:21.121 "compare": false, 00:18:21.121 "compare_and_write": false, 00:18:21.121 "abort": false, 00:18:21.121 "seek_hole": false, 00:18:21.121 "seek_data": false, 00:18:21.121 "copy": false, 00:18:21.121 "nvme_iov_md": 
false 00:18:21.121 }, 00:18:21.121 "memory_domains": [ 00:18:21.121 { 00:18:21.121 "dma_device_id": "system", 00:18:21.121 "dma_device_type": 1 00:18:21.121 }, 00:18:21.121 { 00:18:21.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.121 "dma_device_type": 2 00:18:21.121 }, 00:18:21.121 { 00:18:21.121 "dma_device_id": "system", 00:18:21.121 "dma_device_type": 1 00:18:21.121 }, 00:18:21.121 { 00:18:21.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.121 "dma_device_type": 2 00:18:21.121 } 00:18:21.121 ], 00:18:21.121 "driver_specific": { 00:18:21.121 "raid": { 00:18:21.121 "uuid": "08f28bd0-b4b6-42e4-812c-f046486b2513", 00:18:21.121 "strip_size_kb": 0, 00:18:21.121 "state": "online", 00:18:21.121 "raid_level": "raid1", 00:18:21.121 "superblock": true, 00:18:21.121 "num_base_bdevs": 2, 00:18:21.121 "num_base_bdevs_discovered": 2, 00:18:21.121 "num_base_bdevs_operational": 2, 00:18:21.121 "base_bdevs_list": [ 00:18:21.121 { 00:18:21.121 "name": "pt1", 00:18:21.121 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:21.121 "is_configured": true, 00:18:21.121 "data_offset": 256, 00:18:21.121 "data_size": 7936 00:18:21.121 }, 00:18:21.121 { 00:18:21.121 "name": "pt2", 00:18:21.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.121 "is_configured": true, 00:18:21.121 "data_offset": 256, 00:18:21.121 "data_size": 7936 00:18:21.121 } 00:18:21.121 ] 00:18:21.121 } 00:18:21.121 } 00:18:21.121 }' 00:18:21.121 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:21.380 pt2' 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:21.380 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.381 [2024-09-28 08:55:59.296802] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 08f28bd0-b4b6-42e4-812c-f046486b2513 '!=' 08f28bd0-b4b6-42e4-812c-f046486b2513 ']' 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.381 [2024-09-28 08:55:59.344538] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.381 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.640 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.640 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:21.640 "name": "raid_bdev1", 00:18:21.640 "uuid": "08f28bd0-b4b6-42e4-812c-f046486b2513", 00:18:21.640 "strip_size_kb": 0, 00:18:21.640 "state": "online", 00:18:21.640 "raid_level": "raid1", 00:18:21.640 "superblock": true, 00:18:21.640 "num_base_bdevs": 2, 00:18:21.640 "num_base_bdevs_discovered": 1, 00:18:21.640 "num_base_bdevs_operational": 1, 00:18:21.640 "base_bdevs_list": [ 00:18:21.640 { 00:18:21.640 "name": null, 00:18:21.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.640 "is_configured": false, 00:18:21.640 "data_offset": 0, 00:18:21.640 "data_size": 7936 00:18:21.640 }, 00:18:21.640 { 00:18:21.640 "name": "pt2", 00:18:21.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.640 "is_configured": true, 00:18:21.640 "data_offset": 256, 00:18:21.640 "data_size": 7936 00:18:21.640 } 00:18:21.640 ] 00:18:21.640 }' 00:18:21.640 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.640 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.900 [2024-09-28 08:55:59.803716] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.900 [2024-09-28 08:55:59.803739] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:21.900 [2024-09-28 08:55:59.803795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.900 [2024-09-28 08:55:59.803831] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:21.900 [2024-09-28 08:55:59.803842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.900 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.900 [2024-09-28 08:55:59.859632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:21.900 [2024-09-28 08:55:59.859686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.900 [2024-09-28 08:55:59.859699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:21.900 [2024-09-28 08:55:59.859710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.901 [2024-09-28 08:55:59.861781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.901 [2024-09-28 08:55:59.861812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:21.901 [2024-09-28 08:55:59.861855] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:21.901 [2024-09-28 08:55:59.861897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:21.901 [2024-09-28 08:55:59.861950] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:21.901 [2024-09-28 08:55:59.861962] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:18:21.901 [2024-09-28 08:55:59.862060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:21.901 [2024-09-28 08:55:59.862122] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:21.901 [2024-09-28 08:55:59.862131] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:21.901 [2024-09-28 08:55:59.862183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.901 pt2 00:18:21.901 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.901 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.901 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.901 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.901 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.901 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.901 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.901 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.901 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.901 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.901 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.901 08:55:59 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.901 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.901 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.901 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.901 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.160 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.160 "name": "raid_bdev1", 00:18:22.160 "uuid": "08f28bd0-b4b6-42e4-812c-f046486b2513", 00:18:22.160 "strip_size_kb": 0, 00:18:22.160 "state": "online", 00:18:22.160 "raid_level": "raid1", 00:18:22.160 "superblock": true, 00:18:22.160 "num_base_bdevs": 2, 00:18:22.160 "num_base_bdevs_discovered": 1, 00:18:22.160 "num_base_bdevs_operational": 1, 00:18:22.160 "base_bdevs_list": [ 00:18:22.160 { 00:18:22.160 "name": null, 00:18:22.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.160 "is_configured": false, 00:18:22.160 "data_offset": 256, 00:18:22.160 "data_size": 7936 00:18:22.160 }, 00:18:22.160 { 00:18:22.160 "name": "pt2", 00:18:22.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.160 "is_configured": true, 00:18:22.160 "data_offset": 256, 00:18:22.160 "data_size": 7936 00:18:22.160 } 00:18:22.160 ] 00:18:22.160 }' 00:18:22.160 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.160 08:55:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:22.420 08:56:00 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.420 [2024-09-28 08:56:00.282906] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:22.420 [2024-09-28 08:56:00.282930] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:22.420 [2024-09-28 08:56:00.282973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.420 [2024-09-28 08:56:00.283010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.420 [2024-09-28 08:56:00.283018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.420 [2024-09-28 08:56:00.342835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:22.420 [2024-09-28 08:56:00.342875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.420 [2024-09-28 08:56:00.342890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:22.420 [2024-09-28 08:56:00.342898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.420 [2024-09-28 08:56:00.344970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.420 [2024-09-28 08:56:00.345001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:22.420 [2024-09-28 08:56:00.345042] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:22.420 [2024-09-28 08:56:00.345081] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:22.420 [2024-09-28 08:56:00.345155] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:22.420 [2024-09-28 08:56:00.345170] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:22.420 [2024-09-28 08:56:00.345188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:22.420 [2024-09-28 08:56:00.345240] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:22.420 [2024-09-28 08:56:00.345309] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:22.420 [2024-09-28 08:56:00.345317] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:22.420 [2024-09-28 08:56:00.345374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:22.420 [2024-09-28 08:56:00.345427] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:22.420 [2024-09-28 08:56:00.345437] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:22.420 [2024-09-28 08:56:00.345498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.420 pt1 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.420 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.420 08:56:00 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.421 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.421 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.421 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.421 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.421 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.421 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.421 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.421 "name": "raid_bdev1", 00:18:22.421 "uuid": "08f28bd0-b4b6-42e4-812c-f046486b2513", 00:18:22.421 "strip_size_kb": 0, 00:18:22.421 "state": "online", 00:18:22.421 "raid_level": "raid1", 00:18:22.421 "superblock": true, 00:18:22.421 "num_base_bdevs": 2, 00:18:22.421 "num_base_bdevs_discovered": 1, 00:18:22.421 "num_base_bdevs_operational": 1, 00:18:22.421 "base_bdevs_list": [ 00:18:22.421 { 00:18:22.421 "name": null, 00:18:22.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.421 "is_configured": false, 00:18:22.421 "data_offset": 256, 00:18:22.421 "data_size": 7936 00:18:22.421 }, 00:18:22.421 { 00:18:22.421 "name": "pt2", 00:18:22.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.421 "is_configured": true, 00:18:22.421 "data_offset": 256, 00:18:22.421 "data_size": 7936 00:18:22.421 } 00:18:22.421 ] 00:18:22.421 }' 00:18:22.421 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.421 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:22.990 [2024-09-28 08:56:00.842179] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 08f28bd0-b4b6-42e4-812c-f046486b2513 '!=' 08f28bd0-b4b6-42e4-812c-f046486b2513 ']' 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88747 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88747 ']' 00:18:22.990 08:56:00 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88747 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88747 00:18:22.990 killing process with pid 88747 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88747' 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 88747 00:18:22.990 [2024-09-28 08:56:00.926400] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:22.990 [2024-09-28 08:56:00.926467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.990 [2024-09-28 08:56:00.926504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.990 [2024-09-28 08:56:00.926519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:22.990 08:56:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 88747 00:18:23.250 [2024-09-28 08:56:01.142777] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:24.633 ************************************ 00:18:24.633 END TEST raid_superblock_test_md_interleaved 00:18:24.633 ************************************ 00:18:24.633 08:56:02 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:24.633 00:18:24.633 real 0m6.233s 00:18:24.633 user 0m9.180s 00:18:24.633 sys 0m1.146s 00:18:24.633 08:56:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:24.633 08:56:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.633 08:56:02 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:24.633 08:56:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:24.633 08:56:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:24.633 08:56:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.633 ************************************ 00:18:24.633 START TEST raid_rebuild_test_sb_md_interleaved 00:18:24.633 ************************************ 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:24.633 08:56:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:24.633 
08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89070 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89070 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89070 ']' 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.633 08:56:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.893 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:24.893 Zero copy mechanism will not be used. 00:18:24.893 [2024-09-28 08:56:02.634878] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:24.893 [2024-09-28 08:56:02.634998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89070 ] 00:18:24.893 [2024-09-28 08:56:02.797321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.153 [2024-09-28 08:56:03.040741] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.413 [2024-09-28 08:56:03.266511] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.413 [2024-09-28 08:56:03.266552] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.676 BaseBdev1_malloc 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.676 08:56:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.676 [2024-09-28 08:56:03.517086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:25.676 [2024-09-28 08:56:03.517155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.676 [2024-09-28 08:56:03.517178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:25.676 [2024-09-28 08:56:03.517189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.676 [2024-09-28 08:56:03.519196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.676 [2024-09-28 08:56:03.519229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:25.676 BaseBdev1 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.676 BaseBdev2_malloc 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:25.676 [2024-09-28 08:56:03.587293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:25.676 [2024-09-28 08:56:03.587349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.676 [2024-09-28 08:56:03.587367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:25.676 [2024-09-28 08:56:03.587379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.676 [2024-09-28 08:56:03.589355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.676 [2024-09-28 08:56:03.589387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:25.676 BaseBdev2 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.676 spare_malloc 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.676 spare_delay 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.676 [2024-09-28 08:56:03.658573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:25.676 [2024-09-28 08:56:03.658628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.676 [2024-09-28 08:56:03.658647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:25.676 [2024-09-28 08:56:03.658668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.676 [2024-09-28 08:56:03.660722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.676 [2024-09-28 08:56:03.660754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:25.676 spare 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.676 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.955 [2024-09-28 08:56:03.670617] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:25.955 [2024-09-28 08:56:03.672680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:25.955 [2024-09-28 
08:56:03.672877] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:25.955 [2024-09-28 08:56:03.672893] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:25.955 [2024-09-28 08:56:03.672965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:25.955 [2024-09-28 08:56:03.673035] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:25.955 [2024-09-28 08:56:03.673044] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:25.955 [2024-09-28 08:56:03.673111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.955 "name": "raid_bdev1", 00:18:25.955 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:25.955 "strip_size_kb": 0, 00:18:25.955 "state": "online", 00:18:25.955 "raid_level": "raid1", 00:18:25.955 "superblock": true, 00:18:25.955 "num_base_bdevs": 2, 00:18:25.955 "num_base_bdevs_discovered": 2, 00:18:25.955 "num_base_bdevs_operational": 2, 00:18:25.955 "base_bdevs_list": [ 00:18:25.955 { 00:18:25.955 "name": "BaseBdev1", 00:18:25.955 "uuid": "71319545-a318-5771-bef3-f5ad7eca26c5", 00:18:25.955 "is_configured": true, 00:18:25.955 "data_offset": 256, 00:18:25.955 "data_size": 7936 00:18:25.955 }, 00:18:25.955 { 00:18:25.955 "name": "BaseBdev2", 00:18:25.955 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:25.955 "is_configured": true, 00:18:25.955 "data_offset": 256, 00:18:25.955 "data_size": 7936 00:18:25.955 } 00:18:25.955 ] 00:18:25.955 }' 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.955 08:56:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.231 08:56:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:26.231 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.231 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.231 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:26.231 [2024-09-28 08:56:04.138004] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.231 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.231 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:26.231 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:26.231 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.231 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.231 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.231 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.231 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:26.231 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:26.231 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:26.231 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:26.231 08:56:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.231 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.491 [2024-09-28 08:56:04.229570] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.491 08:56:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.491 "name": "raid_bdev1", 00:18:26.491 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:26.491 "strip_size_kb": 0, 00:18:26.491 "state": "online", 00:18:26.491 "raid_level": "raid1", 00:18:26.491 "superblock": true, 00:18:26.491 "num_base_bdevs": 2, 00:18:26.491 "num_base_bdevs_discovered": 1, 00:18:26.491 "num_base_bdevs_operational": 1, 00:18:26.491 "base_bdevs_list": [ 00:18:26.491 { 00:18:26.491 "name": null, 00:18:26.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.491 "is_configured": false, 00:18:26.491 "data_offset": 0, 00:18:26.491 "data_size": 7936 00:18:26.491 }, 00:18:26.491 { 00:18:26.491 "name": "BaseBdev2", 00:18:26.491 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:26.491 "is_configured": true, 00:18:26.491 "data_offset": 256, 00:18:26.491 "data_size": 7936 00:18:26.491 } 00:18:26.491 ] 00:18:26.491 }' 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.491 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.751 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:26.751 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.751 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.751 [2024-09-28 08:56:04.616901] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:26.751 [2024-09-28 08:56:04.632958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:26.751 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.751 08:56:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:26.751 [2024-09-28 08:56:04.634948] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:27.690 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.691 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.691 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.691 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:27.691 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.691 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.691 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.691 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.691 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.691 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.950 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.950 "name": "raid_bdev1", 00:18:27.950 
"uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:27.950 "strip_size_kb": 0, 00:18:27.950 "state": "online", 00:18:27.950 "raid_level": "raid1", 00:18:27.950 "superblock": true, 00:18:27.950 "num_base_bdevs": 2, 00:18:27.950 "num_base_bdevs_discovered": 2, 00:18:27.950 "num_base_bdevs_operational": 2, 00:18:27.950 "process": { 00:18:27.950 "type": "rebuild", 00:18:27.950 "target": "spare", 00:18:27.950 "progress": { 00:18:27.950 "blocks": 2560, 00:18:27.950 "percent": 32 00:18:27.950 } 00:18:27.950 }, 00:18:27.950 "base_bdevs_list": [ 00:18:27.950 { 00:18:27.950 "name": "spare", 00:18:27.950 "uuid": "177da78f-09c5-5d50-8a35-bb5aba6332b7", 00:18:27.950 "is_configured": true, 00:18:27.950 "data_offset": 256, 00:18:27.950 "data_size": 7936 00:18:27.950 }, 00:18:27.950 { 00:18:27.950 "name": "BaseBdev2", 00:18:27.950 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:27.950 "is_configured": true, 00:18:27.950 "data_offset": 256, 00:18:27.950 "data_size": 7936 00:18:27.950 } 00:18:27.950 ] 00:18:27.950 }' 00:18:27.950 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.950 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:27.950 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.950 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.950 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:27.950 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.950 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.950 [2024-09-28 08:56:05.783632] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:27.950 [2024-09-28 08:56:05.843475] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:27.950 [2024-09-28 08:56:05.843582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.950 [2024-09-28 08:56:05.843599] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:27.950 [2024-09-28 08:56:05.843610] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:27.950 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.950 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.950 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.950 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.950 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.950 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.951 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:27.951 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.951 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.951 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.951 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.951 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.951 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.951 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.951 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.951 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.951 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.951 "name": "raid_bdev1", 00:18:27.951 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:27.951 "strip_size_kb": 0, 00:18:27.951 "state": "online", 00:18:27.951 "raid_level": "raid1", 00:18:27.951 "superblock": true, 00:18:27.951 "num_base_bdevs": 2, 00:18:27.951 "num_base_bdevs_discovered": 1, 00:18:27.951 "num_base_bdevs_operational": 1, 00:18:27.951 "base_bdevs_list": [ 00:18:27.951 { 00:18:27.951 "name": null, 00:18:27.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.951 "is_configured": false, 00:18:27.951 "data_offset": 0, 00:18:27.951 "data_size": 7936 00:18:27.951 }, 00:18:27.951 { 00:18:27.951 "name": "BaseBdev2", 00:18:27.951 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:27.951 "is_configured": true, 00:18:27.951 "data_offset": 256, 00:18:27.951 "data_size": 7936 00:18:27.951 } 00:18:27.951 ] 00:18:27.951 }' 00:18:27.951 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.951 08:56:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.518 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:28.518 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:28.518 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:28.518 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:28.518 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.518 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.518 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.518 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.518 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.518 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.518 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.518 "name": "raid_bdev1", 00:18:28.518 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:28.519 "strip_size_kb": 0, 00:18:28.519 "state": "online", 00:18:28.519 "raid_level": "raid1", 00:18:28.519 "superblock": true, 00:18:28.519 "num_base_bdevs": 2, 00:18:28.519 "num_base_bdevs_discovered": 1, 00:18:28.519 "num_base_bdevs_operational": 1, 00:18:28.519 "base_bdevs_list": [ 00:18:28.519 { 00:18:28.519 "name": null, 00:18:28.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.519 "is_configured": false, 00:18:28.519 "data_offset": 0, 00:18:28.519 "data_size": 7936 00:18:28.519 }, 00:18:28.519 { 00:18:28.519 "name": "BaseBdev2", 00:18:28.519 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:28.519 "is_configured": true, 00:18:28.519 "data_offset": 256, 00:18:28.519 "data_size": 7936 00:18:28.519 } 00:18:28.519 ] 00:18:28.519 }' 
00:18:28.519 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.519 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:28.519 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.519 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:28.519 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:28.519 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.519 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.519 [2024-09-28 08:56:06.455105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:28.519 [2024-09-28 08:56:06.470231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:28.519 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.519 08:56:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:28.519 [2024-09-28 08:56:06.472264] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.901 "name": "raid_bdev1", 00:18:29.901 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:29.901 "strip_size_kb": 0, 00:18:29.901 "state": "online", 00:18:29.901 "raid_level": "raid1", 00:18:29.901 "superblock": true, 00:18:29.901 "num_base_bdevs": 2, 00:18:29.901 "num_base_bdevs_discovered": 2, 00:18:29.901 "num_base_bdevs_operational": 2, 00:18:29.901 "process": { 00:18:29.901 "type": "rebuild", 00:18:29.901 "target": "spare", 00:18:29.901 "progress": { 00:18:29.901 "blocks": 2560, 00:18:29.901 "percent": 32 00:18:29.901 } 00:18:29.901 }, 00:18:29.901 "base_bdevs_list": [ 00:18:29.901 { 00:18:29.901 "name": "spare", 00:18:29.901 "uuid": "177da78f-09c5-5d50-8a35-bb5aba6332b7", 00:18:29.901 "is_configured": true, 00:18:29.901 "data_offset": 256, 00:18:29.901 "data_size": 7936 00:18:29.901 }, 00:18:29.901 { 00:18:29.901 "name": "BaseBdev2", 00:18:29.901 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:29.901 "is_configured": true, 00:18:29.901 "data_offset": 256, 00:18:29.901 "data_size": 7936 00:18:29.901 } 00:18:29.901 ] 00:18:29.901 }' 00:18:29.901 08:56:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:29.901 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=747 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:29.901 08:56:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.901 "name": "raid_bdev1", 00:18:29.901 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:29.901 "strip_size_kb": 0, 00:18:29.901 "state": "online", 00:18:29.901 "raid_level": "raid1", 00:18:29.901 "superblock": true, 00:18:29.901 "num_base_bdevs": 2, 00:18:29.901 "num_base_bdevs_discovered": 2, 00:18:29.901 "num_base_bdevs_operational": 2, 00:18:29.901 "process": { 00:18:29.901 "type": "rebuild", 00:18:29.901 "target": "spare", 00:18:29.901 "progress": { 00:18:29.901 "blocks": 2816, 00:18:29.901 "percent": 35 00:18:29.901 } 00:18:29.901 }, 00:18:29.901 "base_bdevs_list": [ 00:18:29.901 { 00:18:29.901 "name": "spare", 00:18:29.901 "uuid": "177da78f-09c5-5d50-8a35-bb5aba6332b7", 00:18:29.901 "is_configured": true, 00:18:29.901 "data_offset": 256, 00:18:29.901 "data_size": 7936 00:18:29.901 }, 00:18:29.901 { 00:18:29.901 "name": "BaseBdev2", 00:18:29.901 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:29.901 "is_configured": true, 00:18:29.901 "data_offset": 256, 00:18:29.901 "data_size": 7936 00:18:29.901 } 00:18:29.901 ] 00:18:29.901 }' 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:29.901 08:56:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:30.841 08:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:30.841 08:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.841 08:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.841 08:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.841 08:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.841 08:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.841 08:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.841 08:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.841 08:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.841 08:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.841 08:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.841 08:56:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.841 "name": "raid_bdev1", 00:18:30.841 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:30.841 "strip_size_kb": 0, 00:18:30.841 "state": "online", 00:18:30.841 "raid_level": "raid1", 00:18:30.841 "superblock": true, 00:18:30.841 "num_base_bdevs": 2, 00:18:30.841 "num_base_bdevs_discovered": 2, 00:18:30.841 "num_base_bdevs_operational": 2, 00:18:30.841 "process": { 00:18:30.841 "type": "rebuild", 00:18:30.841 "target": "spare", 00:18:30.841 "progress": { 00:18:30.841 "blocks": 5632, 00:18:30.841 "percent": 70 00:18:30.841 } 00:18:30.841 }, 00:18:30.841 "base_bdevs_list": [ 00:18:30.841 { 00:18:30.841 "name": "spare", 00:18:30.841 "uuid": "177da78f-09c5-5d50-8a35-bb5aba6332b7", 00:18:30.841 "is_configured": true, 00:18:30.841 "data_offset": 256, 00:18:30.841 "data_size": 7936 00:18:30.841 }, 00:18:30.841 { 00:18:30.841 "name": "BaseBdev2", 00:18:30.841 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:30.841 "is_configured": true, 00:18:30.841 "data_offset": 256, 00:18:30.841 "data_size": 7936 00:18:30.841 } 00:18:30.841 ] 00:18:30.841 }' 00:18:30.841 08:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.100 08:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.101 08:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.101 08:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.101 08:56:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:31.670 [2024-09-28 08:56:09.593256] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:31.670 [2024-09-28 08:56:09.593327] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:31.670 [2024-09-28 08:56:09.593425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.930 08:56:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:31.930 08:56:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.930 08:56:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.930 08:56:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.930 08:56:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.930 08:56:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.930 08:56:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.930 08:56:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.930 08:56:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.930 08:56:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.930 08:56:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.190 08:56:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.190 "name": "raid_bdev1", 00:18:32.190 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:32.190 "strip_size_kb": 0, 00:18:32.190 "state": "online", 00:18:32.190 "raid_level": "raid1", 00:18:32.190 "superblock": true, 00:18:32.190 "num_base_bdevs": 2, 00:18:32.190 
"num_base_bdevs_discovered": 2, 00:18:32.190 "num_base_bdevs_operational": 2, 00:18:32.190 "base_bdevs_list": [ 00:18:32.190 { 00:18:32.190 "name": "spare", 00:18:32.190 "uuid": "177da78f-09c5-5d50-8a35-bb5aba6332b7", 00:18:32.190 "is_configured": true, 00:18:32.190 "data_offset": 256, 00:18:32.190 "data_size": 7936 00:18:32.190 }, 00:18:32.190 { 00:18:32.190 "name": "BaseBdev2", 00:18:32.190 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:32.190 "is_configured": true, 00:18:32.190 "data_offset": 256, 00:18:32.190 "data_size": 7936 00:18:32.190 } 00:18:32.190 ] 00:18:32.190 }' 00:18:32.190 08:56:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.190 08:56:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:32.190 08:56:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.190 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:32.190 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:32.190 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.190 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.190 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:32.190 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:32.190 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.190 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.190 
08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.190 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.190 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.190 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.190 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.190 "name": "raid_bdev1", 00:18:32.190 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:32.190 "strip_size_kb": 0, 00:18:32.190 "state": "online", 00:18:32.190 "raid_level": "raid1", 00:18:32.190 "superblock": true, 00:18:32.190 "num_base_bdevs": 2, 00:18:32.190 "num_base_bdevs_discovered": 2, 00:18:32.190 "num_base_bdevs_operational": 2, 00:18:32.190 "base_bdevs_list": [ 00:18:32.190 { 00:18:32.190 "name": "spare", 00:18:32.190 "uuid": "177da78f-09c5-5d50-8a35-bb5aba6332b7", 00:18:32.190 "is_configured": true, 00:18:32.190 "data_offset": 256, 00:18:32.190 "data_size": 7936 00:18:32.190 }, 00:18:32.190 { 00:18:32.190 "name": "BaseBdev2", 00:18:32.190 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:32.190 "is_configured": true, 00:18:32.190 "data_offset": 256, 00:18:32.190 "data_size": 7936 00:18:32.190 } 00:18:32.190 ] 00:18:32.190 }' 00:18:32.190 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.190 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:32.190 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.449 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:32.450 08:56:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.450 "name": 
"raid_bdev1", 00:18:32.450 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:32.450 "strip_size_kb": 0, 00:18:32.450 "state": "online", 00:18:32.450 "raid_level": "raid1", 00:18:32.450 "superblock": true, 00:18:32.450 "num_base_bdevs": 2, 00:18:32.450 "num_base_bdevs_discovered": 2, 00:18:32.450 "num_base_bdevs_operational": 2, 00:18:32.450 "base_bdevs_list": [ 00:18:32.450 { 00:18:32.450 "name": "spare", 00:18:32.450 "uuid": "177da78f-09c5-5d50-8a35-bb5aba6332b7", 00:18:32.450 "is_configured": true, 00:18:32.450 "data_offset": 256, 00:18:32.450 "data_size": 7936 00:18:32.450 }, 00:18:32.450 { 00:18:32.450 "name": "BaseBdev2", 00:18:32.450 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:32.450 "is_configured": true, 00:18:32.450 "data_offset": 256, 00:18:32.450 "data_size": 7936 00:18:32.450 } 00:18:32.450 ] 00:18:32.450 }' 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.450 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.710 [2024-09-28 08:56:10.611009] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:32.710 [2024-09-28 08:56:10.611042] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:32.710 [2024-09-28 08:56:10.611123] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:32.710 [2024-09-28 08:56:10.611186] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:32.710 [2024-09-28 
08:56:10.611195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.710 08:56:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.710 [2024-09-28 08:56:10.678898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:32.710 [2024-09-28 08:56:10.678949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.710 [2024-09-28 08:56:10.678970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:32.710 [2024-09-28 08:56:10.678978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.710 [2024-09-28 08:56:10.681129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.710 [2024-09-28 08:56:10.681162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:32.710 [2024-09-28 08:56:10.681210] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:32.710 [2024-09-28 08:56:10.681271] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:32.710 [2024-09-28 08:56:10.681375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:32.710 spare 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.710 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.970 [2024-09-28 08:56:10.781267] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:32.970 [2024-09-28 08:56:10.781296] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:32.970 [2024-09-28 08:56:10.781386] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:32.970 [2024-09-28 08:56:10.781462] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:32.970 [2024-09-28 08:56:10.781470] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:32.970 [2024-09-28 08:56:10.781542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.970 08:56:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.970 "name": "raid_bdev1", 00:18:32.970 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:32.970 "strip_size_kb": 0, 00:18:32.970 "state": "online", 00:18:32.970 "raid_level": "raid1", 00:18:32.970 "superblock": true, 00:18:32.970 "num_base_bdevs": 2, 00:18:32.970 "num_base_bdevs_discovered": 2, 00:18:32.970 "num_base_bdevs_operational": 2, 00:18:32.970 "base_bdevs_list": [ 00:18:32.970 { 00:18:32.970 "name": "spare", 00:18:32.970 "uuid": "177da78f-09c5-5d50-8a35-bb5aba6332b7", 00:18:32.970 "is_configured": true, 00:18:32.970 "data_offset": 256, 00:18:32.970 "data_size": 7936 00:18:32.970 }, 00:18:32.970 { 00:18:32.970 "name": "BaseBdev2", 00:18:32.970 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:32.970 "is_configured": true, 00:18:32.970 "data_offset": 256, 00:18:32.970 "data_size": 7936 00:18:32.970 } 00:18:32.970 ] 00:18:32.970 }' 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.970 08:56:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.540 08:56:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.540 "name": "raid_bdev1", 00:18:33.540 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:33.540 "strip_size_kb": 0, 00:18:33.540 "state": "online", 00:18:33.540 "raid_level": "raid1", 00:18:33.540 "superblock": true, 00:18:33.540 "num_base_bdevs": 2, 00:18:33.540 "num_base_bdevs_discovered": 2, 00:18:33.540 "num_base_bdevs_operational": 2, 00:18:33.540 "base_bdevs_list": [ 00:18:33.540 { 00:18:33.540 "name": "spare", 00:18:33.540 "uuid": "177da78f-09c5-5d50-8a35-bb5aba6332b7", 00:18:33.540 "is_configured": true, 00:18:33.540 "data_offset": 256, 00:18:33.540 "data_size": 7936 00:18:33.540 }, 00:18:33.540 { 00:18:33.540 "name": "BaseBdev2", 00:18:33.540 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:33.540 "is_configured": true, 00:18:33.540 "data_offset": 256, 00:18:33.540 "data_size": 7936 00:18:33.540 } 00:18:33.540 ] 00:18:33.540 }' 00:18:33.540 08:56:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.540 [2024-09-28 08:56:11.385736] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.540 08:56:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.540 "name": "raid_bdev1", 00:18:33.540 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:33.540 "strip_size_kb": 0, 00:18:33.540 "state": "online", 00:18:33.540 
"raid_level": "raid1", 00:18:33.540 "superblock": true, 00:18:33.540 "num_base_bdevs": 2, 00:18:33.540 "num_base_bdevs_discovered": 1, 00:18:33.540 "num_base_bdevs_operational": 1, 00:18:33.540 "base_bdevs_list": [ 00:18:33.540 { 00:18:33.540 "name": null, 00:18:33.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.540 "is_configured": false, 00:18:33.540 "data_offset": 0, 00:18:33.540 "data_size": 7936 00:18:33.540 }, 00:18:33.540 { 00:18:33.540 "name": "BaseBdev2", 00:18:33.540 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:33.540 "is_configured": true, 00:18:33.540 "data_offset": 256, 00:18:33.540 "data_size": 7936 00:18:33.540 } 00:18:33.540 ] 00:18:33.540 }' 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.540 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.109 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:34.109 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.109 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.109 [2024-09-28 08:56:11.820996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.109 [2024-09-28 08:56:11.821167] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:34.109 [2024-09-28 08:56:11.821225] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:34.109 [2024-09-28 08:56:11.821300] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.109 [2024-09-28 08:56:11.836662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:34.109 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.109 08:56:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:34.109 [2024-09-28 08:56:11.838681] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:35.046 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.046 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.046 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.047 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.047 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.047 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.047 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.047 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.047 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.047 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.047 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:35.047 "name": "raid_bdev1", 00:18:35.047 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:35.047 "strip_size_kb": 0, 00:18:35.047 "state": "online", 00:18:35.047 "raid_level": "raid1", 00:18:35.047 "superblock": true, 00:18:35.047 "num_base_bdevs": 2, 00:18:35.047 "num_base_bdevs_discovered": 2, 00:18:35.047 "num_base_bdevs_operational": 2, 00:18:35.047 "process": { 00:18:35.047 "type": "rebuild", 00:18:35.047 "target": "spare", 00:18:35.047 "progress": { 00:18:35.047 "blocks": 2560, 00:18:35.047 "percent": 32 00:18:35.047 } 00:18:35.047 }, 00:18:35.047 "base_bdevs_list": [ 00:18:35.047 { 00:18:35.047 "name": "spare", 00:18:35.047 "uuid": "177da78f-09c5-5d50-8a35-bb5aba6332b7", 00:18:35.047 "is_configured": true, 00:18:35.047 "data_offset": 256, 00:18:35.047 "data_size": 7936 00:18:35.047 }, 00:18:35.047 { 00:18:35.047 "name": "BaseBdev2", 00:18:35.047 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:35.047 "is_configured": true, 00:18:35.047 "data_offset": 256, 00:18:35.047 "data_size": 7936 00:18:35.047 } 00:18:35.047 ] 00:18:35.047 }' 00:18:35.047 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.047 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.047 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.047 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.047 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:35.047 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.047 08:56:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.047 [2024-09-28 08:56:12.999228] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:35.306 [2024-09-28 08:56:13.047099] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:35.306 [2024-09-28 08:56:13.047157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.306 [2024-09-28 08:56:13.047170] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:35.306 [2024-09-28 08:56:13.047180] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.306 08:56:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.306 "name": "raid_bdev1", 00:18:35.306 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:35.306 "strip_size_kb": 0, 00:18:35.306 "state": "online", 00:18:35.306 "raid_level": "raid1", 00:18:35.306 "superblock": true, 00:18:35.306 "num_base_bdevs": 2, 00:18:35.306 "num_base_bdevs_discovered": 1, 00:18:35.306 "num_base_bdevs_operational": 1, 00:18:35.306 "base_bdevs_list": [ 00:18:35.306 { 00:18:35.306 "name": null, 00:18:35.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.306 "is_configured": false, 00:18:35.306 "data_offset": 0, 00:18:35.306 "data_size": 7936 00:18:35.306 }, 00:18:35.306 { 00:18:35.306 "name": "BaseBdev2", 00:18:35.306 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:35.306 "is_configured": true, 00:18:35.306 "data_offset": 256, 00:18:35.306 "data_size": 7936 00:18:35.306 } 00:18:35.306 ] 00:18:35.306 }' 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.306 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.566 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:35.566 08:56:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.566 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.566 [2024-09-28 08:56:13.467083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:35.566 [2024-09-28 08:56:13.467181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.566 [2024-09-28 08:56:13.467215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:35.566 [2024-09-28 08:56:13.467244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.566 [2024-09-28 08:56:13.467460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.566 [2024-09-28 08:56:13.467521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:35.566 [2024-09-28 08:56:13.467592] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:35.566 [2024-09-28 08:56:13.467629] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:35.566 [2024-09-28 08:56:13.467685] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:35.566 [2024-09-28 08:56:13.467752] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:35.566 [2024-09-28 08:56:13.482144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:35.566 spare 00:18:35.566 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.566 08:56:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:35.566 [2024-09-28 08:56:13.484186] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:36.504 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.504 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.504 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.504 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.504 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.504 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.504 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.504 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.504 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:36.765 "name": "raid_bdev1", 00:18:36.765 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:36.765 "strip_size_kb": 0, 00:18:36.765 "state": "online", 00:18:36.765 "raid_level": "raid1", 00:18:36.765 "superblock": true, 00:18:36.765 "num_base_bdevs": 2, 00:18:36.765 "num_base_bdevs_discovered": 2, 00:18:36.765 "num_base_bdevs_operational": 2, 00:18:36.765 "process": { 00:18:36.765 "type": "rebuild", 00:18:36.765 "target": "spare", 00:18:36.765 "progress": { 00:18:36.765 "blocks": 2560, 00:18:36.765 "percent": 32 00:18:36.765 } 00:18:36.765 }, 00:18:36.765 "base_bdevs_list": [ 00:18:36.765 { 00:18:36.765 "name": "spare", 00:18:36.765 "uuid": "177da78f-09c5-5d50-8a35-bb5aba6332b7", 00:18:36.765 "is_configured": true, 00:18:36.765 "data_offset": 256, 00:18:36.765 "data_size": 7936 00:18:36.765 }, 00:18:36.765 { 00:18:36.765 "name": "BaseBdev2", 00:18:36.765 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:36.765 "is_configured": true, 00:18:36.765 "data_offset": 256, 00:18:36.765 "data_size": 7936 00:18:36.765 } 00:18:36.765 ] 00:18:36.765 }' 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.765 [2024-09-28 
08:56:14.648356] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:36.765 [2024-09-28 08:56:14.692232] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:36.765 [2024-09-28 08:56:14.692327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.765 [2024-09-28 08:56:14.692362] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:36.765 [2024-09-28 08:56:14.692381] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.765 08:56:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.765 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.025 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.025 "name": "raid_bdev1", 00:18:37.025 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:37.025 "strip_size_kb": 0, 00:18:37.025 "state": "online", 00:18:37.025 "raid_level": "raid1", 00:18:37.025 "superblock": true, 00:18:37.025 "num_base_bdevs": 2, 00:18:37.025 "num_base_bdevs_discovered": 1, 00:18:37.025 "num_base_bdevs_operational": 1, 00:18:37.025 "base_bdevs_list": [ 00:18:37.025 { 00:18:37.025 "name": null, 00:18:37.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.025 "is_configured": false, 00:18:37.025 "data_offset": 0, 00:18:37.025 "data_size": 7936 00:18:37.025 }, 00:18:37.025 { 00:18:37.025 "name": "BaseBdev2", 00:18:37.025 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:37.025 "is_configured": true, 00:18:37.025 "data_offset": 256, 00:18:37.025 "data_size": 7936 00:18:37.025 } 00:18:37.025 ] 00:18:37.025 }' 00:18:37.025 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.025 08:56:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.286 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:37.286 08:56:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.286 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:37.286 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:37.286 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.286 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.286 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.286 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.286 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.286 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.286 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.286 "name": "raid_bdev1", 00:18:37.286 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:37.286 "strip_size_kb": 0, 00:18:37.286 "state": "online", 00:18:37.286 "raid_level": "raid1", 00:18:37.286 "superblock": true, 00:18:37.286 "num_base_bdevs": 2, 00:18:37.286 "num_base_bdevs_discovered": 1, 00:18:37.286 "num_base_bdevs_operational": 1, 00:18:37.286 "base_bdevs_list": [ 00:18:37.286 { 00:18:37.286 "name": null, 00:18:37.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.286 "is_configured": false, 00:18:37.286 "data_offset": 0, 00:18:37.286 "data_size": 7936 00:18:37.286 }, 00:18:37.286 { 00:18:37.286 "name": "BaseBdev2", 00:18:37.286 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:37.286 "is_configured": true, 00:18:37.286 "data_offset": 256, 
00:18:37.286 "data_size": 7936 00:18:37.286 } 00:18:37.286 ] 00:18:37.286 }' 00:18:37.286 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.286 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:37.286 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.547 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:37.547 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:37.547 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.547 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.547 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.547 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:37.547 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.547 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.547 [2024-09-28 08:56:15.308007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:37.547 [2024-09-28 08:56:15.308101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.547 [2024-09-28 08:56:15.308142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:37.547 [2024-09-28 08:56:15.308169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.547 [2024-09-28 08:56:15.308360] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.547 [2024-09-28 08:56:15.308401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:37.547 [2024-09-28 08:56:15.308477] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:37.547 [2024-09-28 08:56:15.308513] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:37.547 [2024-09-28 08:56:15.308551] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:37.547 [2024-09-28 08:56:15.308595] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:37.547 BaseBdev1 00:18:37.547 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.547 08:56:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:38.486 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:38.486 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.486 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.486 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.486 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.486 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:38.486 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.486 08:56:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.486 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.486 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.486 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.486 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.486 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.486 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.486 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.486 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.486 "name": "raid_bdev1", 00:18:38.486 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:38.486 "strip_size_kb": 0, 00:18:38.486 "state": "online", 00:18:38.486 "raid_level": "raid1", 00:18:38.486 "superblock": true, 00:18:38.486 "num_base_bdevs": 2, 00:18:38.486 "num_base_bdevs_discovered": 1, 00:18:38.486 "num_base_bdevs_operational": 1, 00:18:38.486 "base_bdevs_list": [ 00:18:38.486 { 00:18:38.486 "name": null, 00:18:38.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.486 "is_configured": false, 00:18:38.486 "data_offset": 0, 00:18:38.486 "data_size": 7936 00:18:38.486 }, 00:18:38.486 { 00:18:38.486 "name": "BaseBdev2", 00:18:38.486 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:38.486 "is_configured": true, 00:18:38.486 "data_offset": 256, 00:18:38.486 "data_size": 7936 00:18:38.486 } 00:18:38.486 ] 00:18:38.486 }' 00:18:38.486 08:56:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.486 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.057 "name": "raid_bdev1", 00:18:39.057 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:39.057 "strip_size_kb": 0, 00:18:39.057 "state": "online", 00:18:39.057 "raid_level": "raid1", 00:18:39.057 "superblock": true, 00:18:39.057 "num_base_bdevs": 2, 00:18:39.057 "num_base_bdevs_discovered": 1, 00:18:39.057 "num_base_bdevs_operational": 1, 00:18:39.057 "base_bdevs_list": [ 00:18:39.057 { 00:18:39.057 "name": 
null, 00:18:39.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.057 "is_configured": false, 00:18:39.057 "data_offset": 0, 00:18:39.057 "data_size": 7936 00:18:39.057 }, 00:18:39.057 { 00:18:39.057 "name": "BaseBdev2", 00:18:39.057 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:39.057 "is_configured": true, 00:18:39.057 "data_offset": 256, 00:18:39.057 "data_size": 7936 00:18:39.057 } 00:18:39.057 ] 00:18:39.057 }' 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.057 [2024-09-28 08:56:16.937238] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.057 [2024-09-28 08:56:16.937400] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:39.057 [2024-09-28 08:56:16.937459] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:39.057 request: 00:18:39.057 { 00:18:39.057 "base_bdev": "BaseBdev1", 00:18:39.057 "raid_bdev": "raid_bdev1", 00:18:39.057 "method": "bdev_raid_add_base_bdev", 00:18:39.057 "req_id": 1 00:18:39.057 } 00:18:39.057 Got JSON-RPC error response 00:18:39.057 response: 00:18:39.057 { 00:18:39.057 "code": -22, 00:18:39.057 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:39.057 } 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.057 08:56:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:39.996 08:56:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:39.996 08:56:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.996 08:56:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.996 08:56:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.996 08:56:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.996 08:56:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:39.996 08:56:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.996 08:56:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.996 08:56:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.996 08:56:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.996 08:56:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.996 08:56:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.996 08:56:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.996 08:56:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.996 08:56:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.255 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.255 "name": "raid_bdev1", 00:18:40.256 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:40.256 "strip_size_kb": 0, 
00:18:40.256 "state": "online", 00:18:40.256 "raid_level": "raid1", 00:18:40.256 "superblock": true, 00:18:40.256 "num_base_bdevs": 2, 00:18:40.256 "num_base_bdevs_discovered": 1, 00:18:40.256 "num_base_bdevs_operational": 1, 00:18:40.256 "base_bdevs_list": [ 00:18:40.256 { 00:18:40.256 "name": null, 00:18:40.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.256 "is_configured": false, 00:18:40.256 "data_offset": 0, 00:18:40.256 "data_size": 7936 00:18:40.256 }, 00:18:40.256 { 00:18:40.256 "name": "BaseBdev2", 00:18:40.256 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:40.256 "is_configured": true, 00:18:40.256 "data_offset": 256, 00:18:40.256 "data_size": 7936 00:18:40.256 } 00:18:40.256 ] 00:18:40.256 }' 00:18:40.256 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.256 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.515 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:40.515 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.515 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:40.515 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:40.515 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.515 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.515 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.515 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.515 
08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.515 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.515 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.515 "name": "raid_bdev1", 00:18:40.515 "uuid": "54b0759b-8236-4e80-8a77-439c814e6912", 00:18:40.516 "strip_size_kb": 0, 00:18:40.516 "state": "online", 00:18:40.516 "raid_level": "raid1", 00:18:40.516 "superblock": true, 00:18:40.516 "num_base_bdevs": 2, 00:18:40.516 "num_base_bdevs_discovered": 1, 00:18:40.516 "num_base_bdevs_operational": 1, 00:18:40.516 "base_bdevs_list": [ 00:18:40.516 { 00:18:40.516 "name": null, 00:18:40.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.516 "is_configured": false, 00:18:40.516 "data_offset": 0, 00:18:40.516 "data_size": 7936 00:18:40.516 }, 00:18:40.516 { 00:18:40.516 "name": "BaseBdev2", 00:18:40.516 "uuid": "76b2a40d-915e-5e89-afee-129d8701eeb4", 00:18:40.516 "is_configured": true, 00:18:40.516 "data_offset": 256, 00:18:40.516 "data_size": 7936 00:18:40.516 } 00:18:40.516 ] 00:18:40.516 }' 00:18:40.516 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.516 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:40.516 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.775 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:40.775 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89070 00:18:40.775 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89070 ']' 00:18:40.775 08:56:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89070 00:18:40.775 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:40.775 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:40.775 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89070 00:18:40.775 killing process with pid 89070 00:18:40.775 Received shutdown signal, test time was about 60.000000 seconds 00:18:40.775 00:18:40.775 Latency(us) 00:18:40.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.775 =================================================================================================================== 00:18:40.775 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:40.775 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:40.775 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:40.775 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89070' 00:18:40.775 08:56:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 89070 00:18:40.775 [2024-09-28 08:56:18.557433] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:40.775 [2024-09-28 08:56:18.557537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.775 [2024-09-28 08:56:18.557574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.775 [2024-09-28 08:56:18.557585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:40.775 08:56:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 89070 00:18:41.036 [2024-09-28 08:56:18.869559] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:42.418 08:56:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:42.418 00:18:42.418 real 0m17.630s 00:18:42.418 user 0m22.894s 00:18:42.418 sys 0m1.768s 00:18:42.418 08:56:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:42.418 08:56:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.418 ************************************ 00:18:42.418 END TEST raid_rebuild_test_sb_md_interleaved 00:18:42.418 ************************************ 00:18:42.418 08:56:20 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:42.418 08:56:20 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:42.418 08:56:20 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89070 ']' 00:18:42.418 08:56:20 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89070 00:18:42.418 08:56:20 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:42.418 00:18:42.418 real 12m10.164s 00:18:42.418 user 16m7.504s 00:18:42.418 sys 2m3.117s 00:18:42.418 08:56:20 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:42.418 ************************************ 00:18:42.418 END TEST bdev_raid 00:18:42.418 ************************************ 00:18:42.418 08:56:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:42.418 08:56:20 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:42.418 08:56:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:42.418 08:56:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:42.418 08:56:20 -- common/autotest_common.sh@10 -- # set +x 00:18:42.418 ************************************ 00:18:42.418 START TEST spdkcli_raid 00:18:42.418 
************************************ 00:18:42.418 08:56:20 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:42.678 * Looking for test storage... 00:18:42.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:42.678 08:56:20 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:42.678 08:56:20 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:18:42.678 08:56:20 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:42.678 08:56:20 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:42.678 08:56:20 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:42.678 08:56:20 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:42.678 08:56:20 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:42.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.678 --rc genhtml_branch_coverage=1 00:18:42.678 --rc genhtml_function_coverage=1 00:18:42.678 --rc genhtml_legend=1 00:18:42.678 --rc geninfo_all_blocks=1 00:18:42.678 --rc geninfo_unexecuted_blocks=1 00:18:42.678 00:18:42.678 ' 00:18:42.678 08:56:20 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:42.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.678 --rc genhtml_branch_coverage=1 00:18:42.678 --rc genhtml_function_coverage=1 00:18:42.678 --rc genhtml_legend=1 00:18:42.678 --rc geninfo_all_blocks=1 00:18:42.678 --rc geninfo_unexecuted_blocks=1 00:18:42.678 00:18:42.678 ' 00:18:42.678 
08:56:20 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:42.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.678 --rc genhtml_branch_coverage=1 00:18:42.678 --rc genhtml_function_coverage=1 00:18:42.678 --rc genhtml_legend=1 00:18:42.678 --rc geninfo_all_blocks=1 00:18:42.678 --rc geninfo_unexecuted_blocks=1 00:18:42.678 00:18:42.678 ' 00:18:42.678 08:56:20 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:42.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.678 --rc genhtml_branch_coverage=1 00:18:42.678 --rc genhtml_function_coverage=1 00:18:42.678 --rc genhtml_legend=1 00:18:42.678 --rc geninfo_all_blocks=1 00:18:42.678 --rc geninfo_unexecuted_blocks=1 00:18:42.678 00:18:42.678 ' 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:42.678 08:56:20 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:42.678 08:56:20 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:42.678 08:56:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89752 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:42.678 08:56:20 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89752 00:18:42.678 08:56:20 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 89752 ']' 00:18:42.678 08:56:20 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.678 08:56:20 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:42.678 08:56:20 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.678 08:56:20 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:42.678 08:56:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:42.938 [2024-09-28 08:56:20.699075] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:42.938 [2024-09-28 08:56:20.699185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89752 ] 00:18:42.938 [2024-09-28 08:56:20.861816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:43.197 [2024-09-28 08:56:21.108355] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.197 [2024-09-28 08:56:21.108386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.137 08:56:22 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.137 08:56:22 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:18:44.137 08:56:22 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:44.138 08:56:22 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.138 08:56:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.138 08:56:22 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:44.138 08:56:22 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:44.138 08:56:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.138 08:56:22 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:44.138 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:44.138 ' 00:18:46.044 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:46.044 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:46.044 08:56:23 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:46.044 08:56:23 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:46.044 08:56:23 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.044 08:56:23 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:46.044 08:56:23 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:46.044 08:56:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.045 08:56:23 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:46.045 ' 00:18:46.982 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:46.982 08:56:24 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:46.982 08:56:24 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:46.982 08:56:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.241 08:56:24 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:47.241 08:56:24 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:47.241 08:56:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.241 08:56:24 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:47.241 08:56:24 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:47.500 08:56:25 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:47.759 08:56:25 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:47.759 08:56:25 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:47.760 08:56:25 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:47.760 08:56:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.760 08:56:25 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:47.760 08:56:25 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:47.760 08:56:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.760 08:56:25 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:47.760 ' 00:18:48.697 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:48.698 08:56:26 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:48.698 08:56:26 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:48.698 08:56:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.698 08:56:26 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:48.698 08:56:26 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:48.698 08:56:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.698 08:56:26 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:48.698 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:48.698 ' 00:18:50.080 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:50.080 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:50.340 08:56:28 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:50.340 08:56:28 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:50.340 08:56:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:50.340 08:56:28 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89752 00:18:50.340 08:56:28 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 89752 ']' 00:18:50.340 08:56:28 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 89752 00:18:50.340 08:56:28 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:18:50.340 08:56:28 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:50.340 08:56:28 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89752 00:18:50.340 08:56:28 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:50.340 08:56:28 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:50.340 08:56:28 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89752' 00:18:50.340 killing process with pid 89752 00:18:50.340 08:56:28 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 89752 00:18:50.340 08:56:28 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 89752 00:18:52.882 08:56:30 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:52.882 08:56:30 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89752 ']' 00:18:52.882 08:56:30 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89752 00:18:52.882 08:56:30 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 89752 ']' 00:18:52.882 08:56:30 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 89752 00:18:52.882 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89752) - No such process 00:18:52.882 08:56:30 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 89752 is not found' 00:18:52.882 Process with pid 89752 is not found 00:18:52.882 08:56:30 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:52.882 08:56:30 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:52.882 08:56:30 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:52.882 08:56:30 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:52.882 00:18:52.882 real 0m10.534s 00:18:52.882 user 0m21.088s 00:18:52.882 sys 
0m1.300s 00:18:52.882 08:56:30 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:52.882 08:56:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.882 ************************************ 00:18:52.882 END TEST spdkcli_raid 00:18:52.882 ************************************ 00:18:53.142 08:56:30 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:53.142 08:56:30 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:53.142 08:56:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:53.142 08:56:30 -- common/autotest_common.sh@10 -- # set +x 00:18:53.142 ************************************ 00:18:53.142 START TEST blockdev_raid5f 00:18:53.142 ************************************ 00:18:53.142 08:56:30 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:53.142 * Looking for test storage... 00:18:53.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:53.143 08:56:31 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:53.143 08:56:31 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:18:53.143 08:56:31 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:53.403 08:56:31 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:53.403 08:56:31 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:53.403 08:56:31 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:53.403 08:56:31 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:53.403 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.403 --rc genhtml_branch_coverage=1 00:18:53.403 --rc genhtml_function_coverage=1 00:18:53.403 --rc genhtml_legend=1 00:18:53.403 --rc geninfo_all_blocks=1 00:18:53.403 --rc geninfo_unexecuted_blocks=1 00:18:53.403 00:18:53.403 ' 00:18:53.403 08:56:31 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:53.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.403 --rc genhtml_branch_coverage=1 00:18:53.403 --rc genhtml_function_coverage=1 00:18:53.403 --rc genhtml_legend=1 00:18:53.403 --rc geninfo_all_blocks=1 00:18:53.403 --rc geninfo_unexecuted_blocks=1 00:18:53.403 00:18:53.403 ' 00:18:53.403 08:56:31 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:53.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.403 --rc genhtml_branch_coverage=1 00:18:53.403 --rc genhtml_function_coverage=1 00:18:53.403 --rc genhtml_legend=1 00:18:53.403 --rc geninfo_all_blocks=1 00:18:53.403 --rc geninfo_unexecuted_blocks=1 00:18:53.403 00:18:53.403 ' 00:18:53.403 08:56:31 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:53.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.403 --rc genhtml_branch_coverage=1 00:18:53.403 --rc genhtml_function_coverage=1 00:18:53.403 --rc genhtml_legend=1 00:18:53.403 --rc geninfo_all_blocks=1 00:18:53.403 --rc geninfo_unexecuted_blocks=1 00:18:53.403 00:18:53.403 ' 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:53.403 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90033 00:18:53.404 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:53.404 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:53.404 08:56:31 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90033 00:18:53.404 08:56:31 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 90033 ']' 00:18:53.404 08:56:31 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.404 08:56:31 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:53.404 08:56:31 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.404 08:56:31 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:53.404 08:56:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:53.404 [2024-09-28 08:56:31.283739] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:53.404 [2024-09-28 08:56:31.283941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90033 ] 00:18:53.663 [2024-09-28 08:56:31.447025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.923 [2024-09-28 08:56:31.690010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.864 08:56:32 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:54.864 08:56:32 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:18:54.864 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:54.864 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:18:54.864 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:54.864 08:56:32 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.864 08:56:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:54.864 Malloc0 00:18:54.864 Malloc1 00:18:54.864 Malloc2 00:18:54.864 08:56:32 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.864 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:54.864 08:56:32 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.864 08:56:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:54.864 08:56:32 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.864 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:18:54.864 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:54.864 08:56:32 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.864 08:56:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:54.864 08:56:32 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.864 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:54.864 08:56:32 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.864 08:56:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:54.864 08:56:32 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.864 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:54.864 08:56:32 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.864 08:56:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.124 08:56:32 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.124 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:55.124 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:18:55.124 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:55.124 08:56:32 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.124 08:56:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:55.124 08:56:32 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.124 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:55.124 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:55.124 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "26531702-cf16-48aa-af19-f20aaf997ca7"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "26531702-cf16-48aa-af19-f20aaf997ca7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "26531702-cf16-48aa-af19-f20aaf997ca7",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "360ac5b0-2ae2-4d1d-a1b2-b6368003294d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"e9c06157-890f-4c7c-aaa8-996e2684edcb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "001fab18-11d8-43d1-94e0-b222b62a3955",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:55.124 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:55.125 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:18:55.125 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:55.125 08:56:32 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90033 00:18:55.125 08:56:32 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 90033 ']' 00:18:55.125 08:56:32 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 90033 00:18:55.125 08:56:32 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:18:55.125 08:56:32 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:55.125 08:56:32 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90033 00:18:55.125 killing process with pid 90033 00:18:55.125 08:56:32 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:55.125 08:56:32 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:55.125 08:56:32 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90033' 00:18:55.125 08:56:32 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 90033 00:18:55.125 08:56:32 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 90033 00:18:58.485 08:56:35 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:58.485 08:56:35 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:58.485 08:56:35 
blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:58.485 08:56:35 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:58.485 08:56:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:58.485 ************************************ 00:18:58.485 START TEST bdev_hello_world 00:18:58.485 ************************************ 00:18:58.485 08:56:35 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:58.485 [2024-09-28 08:56:35.994629] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:58.485 [2024-09-28 08:56:35.994745] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90110 ] 00:18:58.485 [2024-09-28 08:56:36.157446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.485 [2024-09-28 08:56:36.405427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.054 [2024-09-28 08:56:36.994144] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:59.055 [2024-09-28 08:56:36.994194] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:59.055 [2024-09-28 08:56:36.994211] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:59.055 [2024-09-28 08:56:36.994688] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:59.055 [2024-09-28 08:56:36.994830] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:59.055 [2024-09-28 08:56:36.994847] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:59.055 [2024-09-28 08:56:36.994892] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:18:59.055 00:18:59.055 [2024-09-28 08:56:36.994908] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:00.964 00:19:00.964 real 0m2.669s 00:19:00.964 user 0m2.200s 00:19:00.964 sys 0m0.349s 00:19:00.964 08:56:38 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:00.964 08:56:38 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:00.964 ************************************ 00:19:00.964 END TEST bdev_hello_world 00:19:00.964 ************************************ 00:19:00.964 08:56:38 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:00.964 08:56:38 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:00.964 08:56:38 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:00.964 08:56:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.964 ************************************ 00:19:00.964 START TEST bdev_bounds 00:19:00.964 ************************************ 00:19:00.964 08:56:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:19:00.964 08:56:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90152 00:19:00.964 08:56:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:00.964 08:56:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:00.964 08:56:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90152' 00:19:00.964 Process bdevio pid: 90152 00:19:00.964 08:56:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90152 00:19:00.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:00.964 08:56:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 90152 ']' 00:19:00.964 08:56:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.964 08:56:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:00.964 08:56:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.964 08:56:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:00.964 08:56:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:00.964 [2024-09-28 08:56:38.742668] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:00.964 [2024-09-28 08:56:38.742780] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90152 ] 00:19:00.964 [2024-09-28 08:56:38.908243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:01.223 [2024-09-28 08:56:39.156814] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.223 [2024-09-28 08:56:39.156995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.223 [2024-09-28 08:56:39.157000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.792 08:56:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:01.792 08:56:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:19:01.792 08:56:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:02.052 I/O targets: 00:19:02.052 raid5f: 131072 blocks of 512 bytes (64 
MiB) 00:19:02.052 00:19:02.052 00:19:02.052 CUnit - A unit testing framework for C - Version 2.1-3 00:19:02.052 http://cunit.sourceforge.net/ 00:19:02.052 00:19:02.052 00:19:02.052 Suite: bdevio tests on: raid5f 00:19:02.052 Test: blockdev write read block ...passed 00:19:02.052 Test: blockdev write zeroes read block ...passed 00:19:02.052 Test: blockdev write zeroes read no split ...passed 00:19:02.052 Test: blockdev write zeroes read split ...passed 00:19:02.312 Test: blockdev write zeroes read split partial ...passed 00:19:02.312 Test: blockdev reset ...passed 00:19:02.312 Test: blockdev write read 8 blocks ...passed 00:19:02.312 Test: blockdev write read size > 128k ...passed 00:19:02.312 Test: blockdev write read invalid size ...passed 00:19:02.312 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:02.312 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:02.312 Test: blockdev write read max offset ...passed 00:19:02.312 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:02.312 Test: blockdev writev readv 8 blocks ...passed 00:19:02.312 Test: blockdev writev readv 30 x 1block ...passed 00:19:02.312 Test: blockdev writev readv block ...passed 00:19:02.312 Test: blockdev writev readv size > 128k ...passed 00:19:02.312 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:02.312 Test: blockdev comparev and writev ...passed 00:19:02.312 Test: blockdev nvme passthru rw ...passed 00:19:02.312 Test: blockdev nvme passthru vendor specific ...passed 00:19:02.312 Test: blockdev nvme admin passthru ...passed 00:19:02.312 Test: blockdev copy ...passed 00:19:02.312 00:19:02.312 Run Summary: Type Total Ran Passed Failed Inactive 00:19:02.312 suites 1 1 n/a 0 0 00:19:02.312 tests 23 23 23 0 0 00:19:02.312 asserts 130 130 130 0 n/a 00:19:02.312 00:19:02.312 Elapsed time = 0.569 seconds 00:19:02.312 0 00:19:02.312 08:56:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 90152 00:19:02.312 08:56:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 90152 ']' 00:19:02.312 08:56:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 90152 00:19:02.312 08:56:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:19:02.312 08:56:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:02.312 08:56:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90152 00:19:02.312 08:56:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:02.312 08:56:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:02.312 08:56:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90152' 00:19:02.312 killing process with pid 90152 00:19:02.312 08:56:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 90152 00:19:02.312 08:56:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 90152 00:19:04.222 08:56:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:04.222 00:19:04.223 real 0m3.121s 00:19:04.223 user 0m7.255s 00:19:04.223 sys 0m0.470s 00:19:04.223 08:56:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:04.223 08:56:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:04.223 ************************************ 00:19:04.223 END TEST bdev_bounds 00:19:04.223 ************************************ 00:19:04.223 08:56:41 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:04.223 08:56:41 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:04.223 08:56:41 blockdev_raid5f -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:19:04.223 08:56:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:04.223 ************************************ 00:19:04.223 START TEST bdev_nbd 00:19:04.223 ************************************ 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:04.223 08:56:41 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90223 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90223 /var/tmp/spdk-nbd.sock 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 90223 ']' 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:04.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:04.223 08:56:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:04.223 [2024-09-28 08:56:41.951994] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:04.223 [2024-09-28 08:56:41.952217] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.223 [2024-09-28 08:56:42.116853] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.483 [2024-09-28 08:56:42.361308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.052 08:56:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:05.052 08:56:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:19:05.052 08:56:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:05.052 08:56:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.052 08:56:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:05.052 08:56:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:05.052 08:56:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:05.052 08:56:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.052 08:56:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:05.052 08:56:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:05.052 08:56:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:05.052 08:56:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:05.052 08:56:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:05.052 08:56:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:05.052 08:56:42 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:05.311 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:05.311 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:05.311 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:05.311 08:56:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:05.311 08:56:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:05.311 08:56:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:05.311 08:56:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:05.311 08:56:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:05.311 08:56:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:05.311 08:56:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:05.311 08:56:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:05.312 08:56:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:05.312 1+0 records in 00:19:05.312 1+0 records out 00:19:05.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465902 s, 8.8 MB/s 00:19:05.312 08:56:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.312 08:56:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:05.312 08:56:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.312 08:56:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:19:05.312 08:56:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:05.312 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:05.312 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:05.312 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:05.571 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:05.571 { 00:19:05.571 "nbd_device": "/dev/nbd0", 00:19:05.571 "bdev_name": "raid5f" 00:19:05.571 } 00:19:05.571 ]' 00:19:05.571 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:05.571 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:05.571 { 00:19:05.571 "nbd_device": "/dev/nbd0", 00:19:05.571 "bdev_name": "raid5f" 00:19:05.571 } 00:19:05.571 ]' 00:19:05.571 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:05.571 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:05.571 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.571 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:05.571 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:05.571 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:05.571 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:05.571 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:05.830 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:05.830 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:05.830 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:05.830 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:05.830 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:05.830 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:05.830 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:05.830 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:05.830 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:05.830 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.830 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:06.090 08:56:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:06.350 /dev/nbd0 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:06.350 08:56:44 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:06.350 1+0 records in 00:19:06.350 1+0 records out 00:19:06.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367134 s, 11.2 MB/s 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.350 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:06.610 { 00:19:06.610 "nbd_device": "/dev/nbd0", 00:19:06.610 "bdev_name": "raid5f" 00:19:06.610 } 00:19:06.610 ]' 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:06.610 { 00:19:06.610 "nbd_device": "/dev/nbd0", 00:19:06.610 "bdev_name": "raid5f" 00:19:06.610 } 00:19:06.610 ]' 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:06.610 256+0 records in 00:19:06.610 256+0 records out 00:19:06.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124406 s, 84.3 MB/s 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:06.610 256+0 records in 00:19:06.610 256+0 records out 00:19:06.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313732 s, 33.4 MB/s 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:06.610 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:06.870 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:06.870 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:06.870 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:06.870 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:06.870 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:06.870 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:06.870 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:06.870 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:06.870 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:06.870 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:06.870 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:07.130 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:07.130 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:07.130 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:07.130 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:07.130 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:07.130 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:07.130 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:07.130 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:07.130 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:07.130 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:07.130 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:07.130 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:07.130 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:07.130 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:07.130 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:07.130 08:56:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:07.390 malloc_lvol_verify 00:19:07.390 08:56:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:07.649 0f61d0f9-f4b6-482e-9535-c92b2ecad1c4 00:19:07.649 08:56:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:07.649 efc3bea6-867d-4663-a8ff-51dbdd1d1d17 00:19:07.649 08:56:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:07.914 /dev/nbd0 00:19:07.914 08:56:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:07.914 08:56:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:07.914 08:56:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:07.914 08:56:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:07.914 08:56:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:07.914 mke2fs 1.47.0 (5-Feb-2023) 00:19:07.914 Discarding device blocks: 0/4096 done 00:19:07.914 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:07.914 00:19:07.914 Allocating group tables: 0/1 done 00:19:07.914 Writing inode tables: 0/1 done 00:19:07.914 Creating journal (1024 blocks): done 00:19:07.914 Writing superblocks and filesystem accounting information: 0/1 done 00:19:07.914 00:19:07.914 08:56:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:07.914 08:56:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:07.914 08:56:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:07.914 08:56:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:07.915 08:56:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:07.915 08:56:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:07.915 08:56:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90223 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 90223 ']' 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 90223 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90223 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:08.175 killing process with pid 90223 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90223' 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 90223 00:19:08.175 08:56:46 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 90223 00:19:10.085 08:56:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:10.085 00:19:10.085 real 0m5.932s 00:19:10.085 user 0m7.718s 00:19:10.085 sys 0m1.367s 00:19:10.085 08:56:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:10.085 08:56:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:10.085 ************************************ 00:19:10.085 END TEST bdev_nbd 00:19:10.085 ************************************ 00:19:10.085 08:56:47 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:10.085 08:56:47 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:10.085 08:56:47 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:10.085 08:56:47 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:10.085 08:56:47 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:10.085 08:56:47 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:10.085 08:56:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:10.085 ************************************ 00:19:10.085 START TEST bdev_fio 00:19:10.085 ************************************ 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:10.085 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:19:10.085 08:56:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:10.085 ************************************ 00:19:10.085 START TEST bdev_fio_rw_verify 00:19:10.085 ************************************ 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:19:10.085 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:10.345 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:10.345 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:10.345 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:19:10.345 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:10.345 08:56:48 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:10.345 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:10.345 fio-3.35 00:19:10.345 Starting 1 thread 00:19:22.557 00:19:22.557 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90424: Sat Sep 28 08:56:59 2024 00:19:22.557 read: IOPS=11.9k, BW=46.6MiB/s (48.9MB/s)(466MiB/10001msec) 00:19:22.557 slat (usec): min=16, max=137, avg=19.57, stdev= 2.61 00:19:22.557 clat (usec): min=12, max=375, avg=134.42, stdev=46.93 00:19:22.557 lat (usec): min=33, max=396, avg=153.99, stdev=47.35 00:19:22.557 clat percentiles (usec): 00:19:22.557 | 50.000th=[ 137], 99.000th=[ 225], 99.900th=[ 251], 99.990th=[ 289], 00:19:22.557 | 99.999th=[ 355] 00:19:22.557 write: IOPS=12.5k, BW=48.8MiB/s (51.1MB/s)(482MiB/9872msec); 0 zone resets 00:19:22.557 slat (usec): min=7, max=235, avg=17.36, stdev= 4.74 00:19:22.557 clat (usec): min=62, max=1702, avg=309.29, stdev=54.72 00:19:22.557 lat (usec): min=78, max=1853, avg=326.65, stdev=56.64 00:19:22.557 clat percentiles (usec): 00:19:22.557 | 50.000th=[ 310], 99.000th=[ 400], 99.900th=[ 914], 99.990th=[ 1532], 00:19:22.557 | 99.999th=[ 1680] 00:19:22.557 bw ( KiB/s): min=46096, max=52024, per=98.81%, avg=49355.79, stdev=1738.31, samples=19 00:19:22.557 iops : min=11524, max=13006, avg=12338.95, stdev=434.58, samples=19 00:19:22.557 lat (usec) : 20=0.01%, 50=0.01%, 100=14.16%, 
250=40.04%, 500=45.61% 00:19:22.557 lat (usec) : 750=0.10%, 1000=0.05% 00:19:22.557 lat (msec) : 2=0.04% 00:19:22.557 cpu : usr=98.68%, sys=0.45%, ctx=42, majf=0, minf=9802 00:19:22.557 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.557 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.557 issued rwts: total=119307,123274,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.557 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:22.558 00:19:22.558 Run status group 0 (all jobs): 00:19:22.558 READ: bw=46.6MiB/s (48.9MB/s), 46.6MiB/s-46.6MiB/s (48.9MB/s-48.9MB/s), io=466MiB (489MB), run=10001-10001msec 00:19:22.558 WRITE: bw=48.8MiB/s (51.1MB/s), 48.8MiB/s-48.8MiB/s (51.1MB/s-51.1MB/s), io=482MiB (505MB), run=9872-9872msec 00:19:23.127 ----------------------------------------------------- 00:19:23.127 Suppressions used: 00:19:23.127 count bytes template 00:19:23.127 1 7 /usr/src/fio/parse.c 00:19:23.127 279 26784 /usr/src/fio/iolog.c 00:19:23.127 1 8 libtcmalloc_minimal.so 00:19:23.127 1 904 libcrypto.so 00:19:23.127 ----------------------------------------------------- 00:19:23.127 00:19:23.127 00:19:23.127 real 0m12.860s 00:19:23.127 user 0m12.924s 00:19:23.127 sys 0m0.754s 00:19:23.127 08:57:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:23.127 08:57:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:23.127 ************************************ 00:19:23.127 END TEST bdev_fio_rw_verify 00:19:23.127 ************************************ 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "26531702-cf16-48aa-af19-f20aaf997ca7"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "26531702-cf16-48aa-af19-f20aaf997ca7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "26531702-cf16-48aa-af19-f20aaf997ca7",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "360ac5b0-2ae2-4d1d-a1b2-b6368003294d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "e9c06157-890f-4c7c-aaa8-996e2684edcb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "001fab18-11d8-43d1-94e0-b222b62a3955",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:23.128 08:57:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:23.128 08:57:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:23.128 08:57:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:23.128 /home/vagrant/spdk_repo/spdk 00:19:23.128 08:57:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:23.128 08:57:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:23.128 08:57:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:19:23.128 00:19:23.128 real 0m13.168s 00:19:23.128 user 0m13.048s 00:19:23.128 sys 0m0.901s 00:19:23.128 08:57:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:23.128 08:57:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:23.128 ************************************ 00:19:23.128 END TEST bdev_fio 00:19:23.128 ************************************ 00:19:23.128 08:57:01 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:23.128 08:57:01 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:23.128 08:57:01 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:23.128 08:57:01 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:23.128 08:57:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.128 ************************************ 00:19:23.128 START TEST bdev_verify 00:19:23.128 ************************************ 00:19:23.128 08:57:01 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:23.388 [2024-09-28 08:57:01.193467] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:23.388 [2024-09-28 08:57:01.193580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90586 ] 00:19:23.388 [2024-09-28 08:57:01.360507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:23.648 [2024-09-28 08:57:01.607980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.648 [2024-09-28 08:57:01.608029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.218 Running I/O for 5 seconds... 00:19:29.353 10731.00 IOPS, 41.92 MiB/s 10842.00 IOPS, 42.35 MiB/s 10919.33 IOPS, 42.65 MiB/s 10929.75 IOPS, 42.69 MiB/s 10915.00 IOPS, 42.64 MiB/s 00:19:29.353 Latency(us) 00:19:29.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.353 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:29.353 Verification LBA range: start 0x0 length 0x2000 00:19:29.353 raid5f : 5.02 6326.15 24.71 0.00 0.00 30516.69 436.43 22665.73 00:19:29.353 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:29.353 Verification LBA range: start 0x2000 length 0x2000 00:19:29.353 raid5f : 5.02 4591.79 17.94 0.00 0.00 42032.08 216.43 30678.86 00:19:29.353 =================================================================================================================== 00:19:29.353 Total : 10917.94 42.65 0.00 0.00 35360.71 216.43 30678.86 00:19:31.265 00:19:31.265 real 0m7.735s 00:19:31.265 user 0m13.945s 00:19:31.265 sys 0m0.392s 00:19:31.265 08:57:08 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:31.265 08:57:08 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:31.265 ************************************ 00:19:31.265 END TEST bdev_verify 00:19:31.265 
************************************ 00:19:31.265 08:57:08 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:31.265 08:57:08 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:31.265 08:57:08 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:31.265 08:57:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:31.265 ************************************ 00:19:31.265 START TEST bdev_verify_big_io 00:19:31.265 ************************************ 00:19:31.265 08:57:08 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:31.265 [2024-09-28 08:57:08.998012] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:31.265 [2024-09-28 08:57:08.998146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90686 ] 00:19:31.265 [2024-09-28 08:57:09.163149] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:31.525 [2024-09-28 08:57:09.419674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.525 [2024-09-28 08:57:09.419759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.094 Running I/O for 5 seconds... 
00:19:37.282 633.00 IOPS, 39.56 MiB/s 760.00 IOPS, 47.50 MiB/s 761.33 IOPS, 47.58 MiB/s 793.25 IOPS, 49.58 MiB/s 799.40 IOPS, 49.96 MiB/s 00:19:37.282 Latency(us) 00:19:37.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.282 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:37.282 Verification LBA range: start 0x0 length 0x200 00:19:37.282 raid5f : 5.10 447.88 27.99 0.00 0.00 7155237.83 257.57 315030.69 00:19:37.282 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:37.282 Verification LBA range: start 0x200 length 0x200 00:19:37.282 raid5f : 5.24 363.32 22.71 0.00 0.00 8714597.71 191.39 380967.35 00:19:37.282 =================================================================================================================== 00:19:37.282 Total : 811.20 50.70 0.00 0.00 7863631.69 191.39 380967.35 00:19:39.185 00:19:39.185 real 0m7.981s 00:19:39.185 user 0m14.427s 00:19:39.185 sys 0m0.390s 00:19:39.185 08:57:16 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:39.186 08:57:16 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.186 ************************************ 00:19:39.186 END TEST bdev_verify_big_io 00:19:39.186 ************************************ 00:19:39.186 08:57:16 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:39.186 08:57:16 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:39.186 08:57:16 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:39.186 08:57:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:39.186 ************************************ 00:19:39.186 START TEST bdev_write_zeroes 00:19:39.186 ************************************ 
00:19:39.186 08:57:16 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:39.186 [2024-09-28 08:57:17.051402] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:39.186 [2024-09-28 08:57:17.051508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90790 ] 00:19:39.444 [2024-09-28 08:57:17.219275] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.702 [2024-09-28 08:57:17.477768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.270 Running I/O for 1 seconds... 00:19:41.209 29847.00 IOPS, 116.59 MiB/s 00:19:41.209 Latency(us) 00:19:41.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.209 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:41.209 raid5f : 1.01 29813.74 116.46 0.00 0.00 4280.48 1337.91 5809.52 00:19:41.209 =================================================================================================================== 00:19:41.209 Total : 29813.74 116.46 0.00 0.00 4280.48 1337.91 5809.52 00:19:43.115 00:19:43.115 real 0m3.706s 00:19:43.115 user 0m3.192s 00:19:43.115 sys 0m0.389s 00:19:43.115 08:57:20 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:43.115 08:57:20 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:43.115 ************************************ 00:19:43.116 END TEST bdev_write_zeroes 00:19:43.116 ************************************ 00:19:43.116 08:57:20 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:43.116 08:57:20 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:43.116 08:57:20 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:43.116 08:57:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:43.116 ************************************ 00:19:43.116 START TEST bdev_json_nonenclosed 00:19:43.116 ************************************ 00:19:43.116 08:57:20 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:43.116 [2024-09-28 08:57:20.827484] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:43.116 [2024-09-28 08:57:20.827592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90844 ] 00:19:43.116 [2024-09-28 08:57:20.989561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.374 [2024-09-28 08:57:21.235791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.374 [2024-09-28 08:57:21.235896] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:19:43.374 [2024-09-28 08:57:21.235924] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:43.374 [2024-09-28 08:57:21.235934] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:43.949 00:19:43.949 real 0m0.927s 00:19:43.949 user 0m0.660s 00:19:43.949 sys 0m0.161s 00:19:43.949 08:57:21 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:43.949 08:57:21 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:43.949 ************************************ 00:19:43.949 END TEST bdev_json_nonenclosed 00:19:43.949 ************************************ 00:19:43.949 08:57:21 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:43.949 08:57:21 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:43.949 08:57:21 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:43.949 08:57:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:43.949 ************************************ 00:19:43.949 START TEST bdev_json_nonarray 00:19:43.949 ************************************ 00:19:43.949 08:57:21 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:43.949 [2024-09-28 08:57:21.841480] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:43.949 [2024-09-28 08:57:21.841597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90874 ] 00:19:44.207 [2024-09-28 08:57:22.009328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.467 [2024-09-28 08:57:22.260676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.467 [2024-09-28 08:57:22.260791] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:19:44.467 [2024-09-28 08:57:22.260818] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:44.467 [2024-09-28 08:57:22.260828] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:44.727 00:19:44.727 real 0m0.939s 00:19:44.727 user 0m0.662s 00:19:44.727 sys 0m0.171s 00:19:44.727 08:57:22 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:44.727 08:57:22 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:44.727 ************************************ 00:19:44.727 END TEST bdev_json_nonarray 00:19:44.727 ************************************ 00:19:44.988 08:57:22 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:19:44.988 08:57:22 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:19:44.988 08:57:22 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:19:44.988 08:57:22 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:19:44.988 08:57:22 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:19:44.988 08:57:22 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:44.988 08:57:22 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:44.988 08:57:22 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:44.988 08:57:22 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:44.988 08:57:22 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:44.988 08:57:22 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:44.988 00:19:44.988 real 0m51.817s 00:19:44.988 user 1m7.930s 00:19:44.988 sys 0m5.859s 00:19:44.988 08:57:22 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:44.988 08:57:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:44.988 ************************************ 00:19:44.988 END TEST blockdev_raid5f 00:19:44.988 ************************************ 00:19:44.988 08:57:22 -- spdk/autotest.sh@194 -- # uname -s 00:19:44.988 08:57:22 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:44.988 08:57:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:44.988 08:57:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:44.988 08:57:22 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:44.988 08:57:22 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:19:44.988 08:57:22 -- spdk/autotest.sh@256 -- # timing_exit lib 00:19:44.988 08:57:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:44.988 08:57:22 -- common/autotest_common.sh@10 -- # set +x 00:19:44.988 08:57:22 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:19:44.988 08:57:22 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:19:44.988 08:57:22 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:19:44.988 08:57:22 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:19:44.988 08:57:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:44.988 08:57:22 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:44.988 08:57:22 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:19:44.988 08:57:22 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:19:44.988 08:57:22 -- spdk/autotest.sh@334 -- # '[' 
0 -eq 1 ']' 00:19:44.988 08:57:22 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:44.988 08:57:22 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:44.988 08:57:22 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:44.988 08:57:22 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:19:44.988 08:57:22 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:44.988 08:57:22 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:19:44.988 08:57:22 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:44.988 08:57:22 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:44.988 08:57:22 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:19:44.988 08:57:22 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:19:44.988 08:57:22 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:19:44.988 08:57:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:44.988 08:57:22 -- common/autotest_common.sh@10 -- # set +x 00:19:44.988 08:57:22 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:19:44.988 08:57:22 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:19:44.988 08:57:22 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:19:44.988 08:57:22 -- common/autotest_common.sh@10 -- # set +x 00:19:47.527 INFO: APP EXITING 00:19:47.527 INFO: killing all VMs 00:19:47.527 INFO: killing vhost app 00:19:47.527 INFO: EXIT DONE 00:19:47.787 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:47.787 Waiting for block devices as requested 00:19:47.787 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:48.047 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:48.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:48.986 Cleaning 00:19:48.986 Removing: /var/run/dpdk/spdk0/config 00:19:48.986 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:48.986 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:48.986 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:48.986 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:48.986 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:48.986 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:48.986 Removing: /dev/shm/spdk_tgt_trace.pid56799 00:19:48.986 Removing: /var/run/dpdk/spdk0 00:19:48.986 Removing: /var/run/dpdk/spdk_pid56564 00:19:48.986 Removing: /var/run/dpdk/spdk_pid56799 00:19:48.986 Removing: /var/run/dpdk/spdk_pid57028 00:19:48.986 Removing: /var/run/dpdk/spdk_pid57139 00:19:48.987 Removing: /var/run/dpdk/spdk_pid57188 00:19:48.987 Removing: /var/run/dpdk/spdk_pid57327 00:19:48.987 Removing: /var/run/dpdk/spdk_pid57345 00:19:48.987 Removing: /var/run/dpdk/spdk_pid57555 00:19:48.987 Removing: /var/run/dpdk/spdk_pid57672 00:19:48.987 Removing: /var/run/dpdk/spdk_pid57779 00:19:48.987 Removing: /var/run/dpdk/spdk_pid57907 00:19:48.987 Removing: /var/run/dpdk/spdk_pid58020 00:19:48.987 Removing: /var/run/dpdk/spdk_pid58065 00:19:48.987 Removing: /var/run/dpdk/spdk_pid58102 00:19:48.987 Removing: /var/run/dpdk/spdk_pid58178 00:19:48.987 Removing: /var/run/dpdk/spdk_pid58306 00:19:48.987 Removing: /var/run/dpdk/spdk_pid58748 00:19:48.987 Removing: /var/run/dpdk/spdk_pid58828 00:19:48.987 Removing: /var/run/dpdk/spdk_pid58908 00:19:49.247 Removing: /var/run/dpdk/spdk_pid58924 00:19:49.247 Removing: /var/run/dpdk/spdk_pid59088 00:19:49.247 Removing: /var/run/dpdk/spdk_pid59115 00:19:49.247 Removing: /var/run/dpdk/spdk_pid59271 00:19:49.247 Removing: /var/run/dpdk/spdk_pid59291 00:19:49.247 Removing: /var/run/dpdk/spdk_pid59364 00:19:49.247 Removing: /var/run/dpdk/spdk_pid59387 00:19:49.247 Removing: /var/run/dpdk/spdk_pid59457 00:19:49.247 Removing: /var/run/dpdk/spdk_pid59475 00:19:49.247 Removing: /var/run/dpdk/spdk_pid59681 00:19:49.247 Removing: /var/run/dpdk/spdk_pid59723 00:19:49.247 Removing: /var/run/dpdk/spdk_pid59812 00:19:49.247 Removing: /var/run/dpdk/spdk_pid61194 00:19:49.247 Removing: 
/var/run/dpdk/spdk_pid61400 00:19:49.247 Removing: /var/run/dpdk/spdk_pid61551 00:19:49.247 Removing: /var/run/dpdk/spdk_pid62200 00:19:49.247 Removing: /var/run/dpdk/spdk_pid62406 00:19:49.247 Removing: /var/run/dpdk/spdk_pid62552 00:19:49.247 Removing: /var/run/dpdk/spdk_pid63205 00:19:49.247 Removing: /var/run/dpdk/spdk_pid63531 00:19:49.247 Removing: /var/run/dpdk/spdk_pid63676 00:19:49.247 Removing: /var/run/dpdk/spdk_pid65068 00:19:49.247 Removing: /var/run/dpdk/spdk_pid65321 00:19:49.247 Removing: /var/run/dpdk/spdk_pid65472 00:19:49.247 Removing: /var/run/dpdk/spdk_pid66864 00:19:49.247 Removing: /var/run/dpdk/spdk_pid67117 00:19:49.247 Removing: /var/run/dpdk/spdk_pid67268 00:19:49.247 Removing: /var/run/dpdk/spdk_pid68654 00:19:49.247 Removing: /var/run/dpdk/spdk_pid69100 00:19:49.247 Removing: /var/run/dpdk/spdk_pid69246 00:19:49.247 Removing: /var/run/dpdk/spdk_pid70731 00:19:49.247 Removing: /var/run/dpdk/spdk_pid70990 00:19:49.247 Removing: /var/run/dpdk/spdk_pid71136 00:19:49.247 Removing: /var/run/dpdk/spdk_pid72632 00:19:49.247 Removing: /var/run/dpdk/spdk_pid72898 00:19:49.247 Removing: /var/run/dpdk/spdk_pid73044 00:19:49.247 Removing: /var/run/dpdk/spdk_pid74535 00:19:49.247 Removing: /var/run/dpdk/spdk_pid75028 00:19:49.247 Removing: /var/run/dpdk/spdk_pid75173 00:19:49.247 Removing: /var/run/dpdk/spdk_pid75317 00:19:49.247 Removing: /var/run/dpdk/spdk_pid75735 00:19:49.247 Removing: /var/run/dpdk/spdk_pid76454 00:19:49.247 Removing: /var/run/dpdk/spdk_pid76830 00:19:49.247 Removing: /var/run/dpdk/spdk_pid77526 00:19:49.247 Removing: /var/run/dpdk/spdk_pid77971 00:19:49.247 Removing: /var/run/dpdk/spdk_pid78732 00:19:49.247 Removing: /var/run/dpdk/spdk_pid79156 00:19:49.247 Removing: /var/run/dpdk/spdk_pid81119 00:19:49.247 Removing: /var/run/dpdk/spdk_pid81567 00:19:49.247 Removing: /var/run/dpdk/spdk_pid82007 00:19:49.247 Removing: /var/run/dpdk/spdk_pid84123 00:19:49.247 Removing: /var/run/dpdk/spdk_pid84609 00:19:49.247 Removing: 
/var/run/dpdk/spdk_pid85133 00:19:49.247 Removing: /var/run/dpdk/spdk_pid86206 00:19:49.247 Removing: /var/run/dpdk/spdk_pid86534 00:19:49.508 Removing: /var/run/dpdk/spdk_pid87473 00:19:49.508 Removing: /var/run/dpdk/spdk_pid87807 00:19:49.508 Removing: /var/run/dpdk/spdk_pid88747 00:19:49.508 Removing: /var/run/dpdk/spdk_pid89070 00:19:49.508 Removing: /var/run/dpdk/spdk_pid89752 00:19:49.508 Removing: /var/run/dpdk/spdk_pid90033 00:19:49.508 Removing: /var/run/dpdk/spdk_pid90110 00:19:49.508 Removing: /var/run/dpdk/spdk_pid90152 00:19:49.508 Removing: /var/run/dpdk/spdk_pid90409 00:19:49.508 Removing: /var/run/dpdk/spdk_pid90586 00:19:49.508 Removing: /var/run/dpdk/spdk_pid90686 00:19:49.508 Removing: /var/run/dpdk/spdk_pid90790 00:19:49.508 Removing: /var/run/dpdk/spdk_pid90844 00:19:49.508 Removing: /var/run/dpdk/spdk_pid90874 00:19:49.508 Clean 00:19:49.508 08:57:27 -- common/autotest_common.sh@1451 -- # return 0 00:19:49.508 08:57:27 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:19:49.508 08:57:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:49.508 08:57:27 -- common/autotest_common.sh@10 -- # set +x 00:19:49.508 08:57:27 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:19:49.508 08:57:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:49.508 08:57:27 -- common/autotest_common.sh@10 -- # set +x 00:19:49.769 08:57:27 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:49.769 08:57:27 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:49.769 08:57:27 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:49.769 08:57:27 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:19:49.769 08:57:27 -- spdk/autotest.sh@394 -- # hostname 00:19:49.769 08:57:27 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:49.769 geninfo: WARNING: invalid characters removed from testname! 00:20:16.331 08:57:52 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:17.269 08:57:55 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:19.172 08:57:56 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:21.082 08:57:59 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:23.621 08:58:01 -- spdk/autotest.sh@402 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:25.528 08:58:03 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:27.447 08:58:05 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:27.447 08:58:05 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:20:27.447 08:58:05 -- common/autotest_common.sh@1681 -- $ lcov --version 00:20:27.447 08:58:05 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:20:27.447 08:58:05 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:20:27.447 08:58:05 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:20:27.447 08:58:05 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:20:27.447 08:58:05 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:20:27.447 08:58:05 -- scripts/common.sh@336 -- $ IFS=.-: 00:20:27.447 08:58:05 -- scripts/common.sh@336 -- $ read -ra ver1 00:20:27.447 08:58:05 -- scripts/common.sh@337 -- $ IFS=.-: 00:20:27.447 08:58:05 -- scripts/common.sh@337 -- $ read -ra ver2 00:20:27.447 08:58:05 -- scripts/common.sh@338 -- $ local 'op=<' 00:20:27.447 08:58:05 -- scripts/common.sh@340 -- $ ver1_l=2 00:20:27.447 08:58:05 -- scripts/common.sh@341 -- $ ver2_l=1 00:20:27.447 08:58:05 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:20:27.447 08:58:05 -- scripts/common.sh@344 -- $ case "$op" in 00:20:27.447 08:58:05 -- scripts/common.sh@345 -- $ : 1 
00:20:27.447 08:58:05 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:20:27.447 08:58:05 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:27.733 08:58:05 -- scripts/common.sh@365 -- $ decimal 1 00:20:27.733 08:58:05 -- scripts/common.sh@353 -- $ local d=1 00:20:27.733 08:58:05 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:20:27.733 08:58:05 -- scripts/common.sh@355 -- $ echo 1 00:20:27.733 08:58:05 -- scripts/common.sh@365 -- $ ver1[v]=1 00:20:27.733 08:58:05 -- scripts/common.sh@366 -- $ decimal 2 00:20:27.733 08:58:05 -- scripts/common.sh@353 -- $ local d=2 00:20:27.733 08:58:05 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:20:27.733 08:58:05 -- scripts/common.sh@355 -- $ echo 2 00:20:27.733 08:58:05 -- scripts/common.sh@366 -- $ ver2[v]=2 00:20:27.733 08:58:05 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:20:27.733 08:58:05 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:20:27.733 08:58:05 -- scripts/common.sh@368 -- $ return 0 00:20:27.733 08:58:05 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:27.733 08:58:05 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:20:27.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.733 --rc genhtml_branch_coverage=1 00:20:27.733 --rc genhtml_function_coverage=1 00:20:27.733 --rc genhtml_legend=1 00:20:27.733 --rc geninfo_all_blocks=1 00:20:27.733 --rc geninfo_unexecuted_blocks=1 00:20:27.733 00:20:27.733 ' 00:20:27.733 08:58:05 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:20:27.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.733 --rc genhtml_branch_coverage=1 00:20:27.733 --rc genhtml_function_coverage=1 00:20:27.733 --rc genhtml_legend=1 00:20:27.733 --rc geninfo_all_blocks=1 00:20:27.733 --rc geninfo_unexecuted_blocks=1 00:20:27.733 00:20:27.733 ' 00:20:27.733 08:58:05 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 
00:20:27.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.733 --rc genhtml_branch_coverage=1 00:20:27.733 --rc genhtml_function_coverage=1 00:20:27.733 --rc genhtml_legend=1 00:20:27.733 --rc geninfo_all_blocks=1 00:20:27.733 --rc geninfo_unexecuted_blocks=1 00:20:27.733 00:20:27.733 ' 00:20:27.733 08:58:05 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:20:27.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.733 --rc genhtml_branch_coverage=1 00:20:27.733 --rc genhtml_function_coverage=1 00:20:27.733 --rc genhtml_legend=1 00:20:27.733 --rc geninfo_all_blocks=1 00:20:27.733 --rc geninfo_unexecuted_blocks=1 00:20:27.733 00:20:27.733 ' 00:20:27.733 08:58:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:27.733 08:58:05 -- scripts/common.sh@15 -- $ shopt -s extglob 00:20:27.733 08:58:05 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:27.733 08:58:05 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.733 08:58:05 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.733 08:58:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.733 08:58:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.733 08:58:05 -- paths/export.sh@4 
-- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.733 08:58:05 -- paths/export.sh@5 -- $ export PATH 00:20:27.733 08:58:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.733 08:58:05 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:27.733 08:58:05 -- common/autobuild_common.sh@479 -- $ date +%s 00:20:27.733 08:58:05 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727513885.XXXXXX 00:20:27.733 08:58:05 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727513885.ULpgru 00:20:27.733 08:58:05 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:20:27.733 08:58:05 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:20:27.733 08:58:05 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:20:27.733 08:58:05 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:27.733 08:58:05 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:27.733 08:58:05 -- common/autobuild_common.sh@495 -- $ 
get_config_params 00:20:27.733 08:58:05 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:20:27.733 08:58:05 -- common/autotest_common.sh@10 -- $ set +x 00:20:27.733 08:58:05 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:20:27.733 08:58:05 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:20:27.733 08:58:05 -- pm/common@17 -- $ local monitor 00:20:27.733 08:58:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:27.733 08:58:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:27.733 08:58:05 -- pm/common@25 -- $ sleep 1 00:20:27.733 08:58:05 -- pm/common@21 -- $ date +%s 00:20:27.733 08:58:05 -- pm/common@21 -- $ date +%s 00:20:27.733 08:58:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727513885 00:20:27.733 08:58:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727513885 00:20:27.733 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727513885_collect-cpu-load.pm.log 00:20:27.733 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727513885_collect-vmstat.pm.log 00:20:28.687 08:58:06 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:20:28.687 08:58:06 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:20:28.687 08:58:06 -- spdk/autopackage.sh@14 -- $ timing_finish 00:20:28.687 08:58:06 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:28.687 08:58:06 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:28.687 
08:58:06 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:28.687 08:58:06 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:20:28.687 08:58:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:20:28.687 08:58:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:20:28.687 08:58:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:28.687 08:58:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:20:28.687 08:58:06 -- pm/common@44 -- $ pid=92380 00:20:28.687 08:58:06 -- pm/common@50 -- $ kill -TERM 92380 00:20:28.687 08:58:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:28.687 08:58:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:20:28.687 08:58:06 -- pm/common@44 -- $ pid=92382 00:20:28.687 08:58:06 -- pm/common@50 -- $ kill -TERM 92382 00:20:28.687 + [[ -n 5419 ]] 00:20:28.687 + sudo kill 5419 00:20:28.696 [Pipeline] } 00:20:28.712 [Pipeline] // timeout 00:20:28.717 [Pipeline] } 00:20:28.731 [Pipeline] // stage 00:20:28.736 [Pipeline] } 00:20:28.750 [Pipeline] // catchError 00:20:28.759 [Pipeline] stage 00:20:28.761 [Pipeline] { (Stop VM) 00:20:28.773 [Pipeline] sh 00:20:29.057 + vagrant halt 00:20:31.593 ==> default: Halting domain... 00:20:39.741 [Pipeline] sh 00:20:40.024 + vagrant destroy -f 00:20:42.563 ==> default: Removing domain... 
00:20:42.576 [Pipeline] sh 00:20:42.860 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:20:42.870 [Pipeline] } 00:20:42.884 [Pipeline] // stage 00:20:42.890 [Pipeline] } 00:20:42.903 [Pipeline] // dir 00:20:42.909 [Pipeline] } 00:20:42.923 [Pipeline] // wrap 00:20:42.929 [Pipeline] } 00:20:42.942 [Pipeline] // catchError 00:20:42.952 [Pipeline] stage 00:20:42.954 [Pipeline] { (Epilogue) 00:20:42.967 [Pipeline] sh 00:20:43.252 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:47.459 [Pipeline] catchError 00:20:47.461 [Pipeline] { 00:20:47.474 [Pipeline] sh 00:20:47.759 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:47.759 Artifacts sizes are good 00:20:47.768 [Pipeline] } 00:20:47.782 [Pipeline] // catchError 00:20:47.793 [Pipeline] archiveArtifacts 00:20:47.800 Archiving artifacts 00:20:47.933 [Pipeline] cleanWs 00:20:47.956 [WS-CLEANUP] Deleting project workspace... 00:20:47.956 [WS-CLEANUP] Deferred wipeout is used... 00:20:47.969 [WS-CLEANUP] done 00:20:47.973 [Pipeline] } 00:20:48.001 [Pipeline] // stage 00:20:48.005 [Pipeline] } 00:20:48.013 [Pipeline] // node 00:20:48.017 [Pipeline] End of Pipeline 00:20:48.043 Finished: SUCCESS